00:00:00.001 Started by upstream project "autotest-per-patch" build number 132546 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.107 Fetching changes from the remote Git repository 00:00:00.111 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.143 Using shallow fetch with depth 1 00:00:00.143 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.143 > git --version # timeout=10 00:00:00.175 > git --version # 'git version 2.39.2' 00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.217 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.217 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.042 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.053 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.066 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.066 > git config core.sparsecheckout # timeout=10 00:00:04.076 > git read-tree -mu HEAD # timeout=10 00:00:04.090 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.115 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.115 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.202 [Pipeline] Start of Pipeline 00:00:04.220 [Pipeline] library 00:00:04.222 Loading library shm_lib@master 00:00:04.222 Library shm_lib@master is cached. Copying from home. 00:00:04.238 [Pipeline] node 00:00:04.249 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:04.250 [Pipeline] { 00:00:04.258 [Pipeline] catchError 00:00:04.259 [Pipeline] { 00:00:04.301 [Pipeline] wrap 00:00:04.311 [Pipeline] { 00:00:04.319 [Pipeline] stage 00:00:04.321 [Pipeline] { (Prologue) 00:00:04.344 [Pipeline] echo 00:00:04.346 Node: VM-host-SM17 00:00:04.352 [Pipeline] cleanWs 00:00:04.360 [WS-CLEANUP] Deleting project workspace... 00:00:04.360 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.365 [WS-CLEANUP] done 00:00:04.699 [Pipeline] setCustomBuildProperty 00:00:04.812 [Pipeline] httpRequest 00:00:05.104 [Pipeline] echo 00:00:05.104 Sorcerer 10.211.164.20 is alive 00:00:05.112 [Pipeline] retry 00:00:05.113 [Pipeline] { 00:00:05.122 [Pipeline] httpRequest 00:00:05.126 HttpMethod: GET 00:00:05.126 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.127 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.131 Response Code: HTTP/1.1 200 OK 00:00:05.131 Success: Status code 200 is in the accepted range: 200,404 00:00:05.131 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.571 [Pipeline] } 00:00:05.591 [Pipeline] // retry 00:00:05.599 [Pipeline] sh 00:00:05.879 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.893 [Pipeline] httpRequest 00:00:06.286 [Pipeline] echo 00:00:06.287 Sorcerer 10.211.164.20 is alive 00:00:06.296 [Pipeline] retry 00:00:06.298 [Pipeline] { 00:00:06.313 [Pipeline] httpRequest 00:00:06.317 HttpMethod: GET 00:00:06.317 URL: http://10.211.164.20/packages/spdk_67afc973b5304e3c0071a91bb37baa3fcd5bdf74.tar.gz 00:00:06.318 Sending request to url: http://10.211.164.20/packages/spdk_67afc973b5304e3c0071a91bb37baa3fcd5bdf74.tar.gz 00:00:06.329 Response Code: HTTP/1.1 200 OK 00:00:06.330 Success: Status code 200 is in the accepted range: 200,404 00:00:06.330 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk_67afc973b5304e3c0071a91bb37baa3fcd5bdf74.tar.gz 00:01:11.207 [Pipeline] } 00:01:11.226 [Pipeline] // retry 00:01:11.234 [Pipeline] sh 00:01:11.575 + tar --no-same-owner -xf spdk_67afc973b5304e3c0071a91bb37baa3fcd5bdf74.tar.gz 00:01:14.875 [Pipeline] sh 00:01:15.159 + git -C spdk log --oneline -n5 00:01:15.159 67afc973b bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:01:15.159 16e5e505a bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:01:15.159 20b346609 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:01:15.159 2a91567e4 CHANGELOG.md: corrected typo 00:01:15.159 6c35d974e lib/nvme: destruct controllers that failed init asynchronously 00:01:15.182 [Pipeline] writeFile 00:01:15.201 [Pipeline] sh 00:01:15.482 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:15.499 [Pipeline] sh 00:01:15.791 + cat autorun-spdk.conf 00:01:15.791 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.791 SPDK_TEST_NVMF=1 00:01:15.791 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.791 SPDK_TEST_URING=1 00:01:15.791 SPDK_TEST_USDT=1 00:01:15.791 SPDK_RUN_UBSAN=1 00:01:15.791 NET_TYPE=virt 00:01:15.791 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.798 RUN_NIGHTLY=0 00:01:15.801 [Pipeline] } 00:01:15.815 [Pipeline] // stage 00:01:15.832 [Pipeline] stage 00:01:15.835 [Pipeline] { (Run VM) 00:01:15.848 [Pipeline] sh 00:01:16.131 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:16.131 + echo 'Start stage prepare_nvme.sh' 00:01:16.131 Start stage prepare_nvme.sh 00:01:16.131 + [[ -n 7 ]] 00:01:16.131 + disk_prefix=ex7 00:01:16.131 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 ]] 00:01:16.131 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf ]] 00:01:16.131 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf 00:01:16.131 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.131 ++ SPDK_TEST_NVMF=1 
00:01:16.131 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.131 ++ SPDK_TEST_URING=1 00:01:16.131 ++ SPDK_TEST_USDT=1 00:01:16.131 ++ SPDK_RUN_UBSAN=1 00:01:16.131 ++ NET_TYPE=virt 00:01:16.131 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.131 ++ RUN_NIGHTLY=0 00:01:16.131 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:01:16.131 + nvme_files=() 00:01:16.131 + declare -A nvme_files 00:01:16.131 + backend_dir=/var/lib/libvirt/images/backends 00:01:16.131 + nvme_files['nvme.img']=5G 00:01:16.131 + nvme_files['nvme-cmb.img']=5G 00:01:16.131 + nvme_files['nvme-multi0.img']=4G 00:01:16.131 + nvme_files['nvme-multi1.img']=4G 00:01:16.131 + nvme_files['nvme-multi2.img']=4G 00:01:16.131 + nvme_files['nvme-openstack.img']=8G 00:01:16.131 + nvme_files['nvme-zns.img']=5G 00:01:16.131 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:16.131 + (( SPDK_TEST_FTL == 1 )) 00:01:16.131 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:16.131 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:16.131 + for nvme in "${!nvme_files[@]}" 00:01:16.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:16.131 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.131 + for nvme in "${!nvme_files[@]}" 00:01:16.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:16.131 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.131 + for nvme in "${!nvme_files[@]}" 00:01:16.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:16.131 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:16.131 + for nvme in "${!nvme_files[@]}" 00:01:16.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:16.131 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.131 + for nvme in "${!nvme_files[@]}" 00:01:16.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:16.131 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.131 + for nvme in "${!nvme_files[@]}" 00:01:16.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:16.131 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.131 + for nvme in "${!nvme_files[@]}" 00:01:16.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:16.131 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.131 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:16.131 + echo 'End stage prepare_nvme.sh' 00:01:16.131 End stage prepare_nvme.sh 00:01:16.143 [Pipeline] sh 00:01:16.426 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:16.426 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b 
/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:16.426 00:01:16.426 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant 00:01:16.426 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk 00:01:16.426 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:01:16.426 HELP=0 00:01:16.426 DRY_RUN=0 00:01:16.426 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:16.426 NVME_DISKS_TYPE=nvme,nvme, 00:01:16.426 NVME_AUTO_CREATE=0 00:01:16.426 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:16.426 NVME_CMB=,, 00:01:16.426 NVME_PMR=,, 00:01:16.426 NVME_ZNS=,, 00:01:16.426 NVME_MS=,, 00:01:16.426 NVME_FDP=,, 00:01:16.426 SPDK_VAGRANT_DISTRO=fedora39 00:01:16.426 SPDK_VAGRANT_VMCPU=10 00:01:16.426 SPDK_VAGRANT_VMRAM=12288 00:01:16.426 SPDK_VAGRANT_PROVIDER=libvirt 00:01:16.426 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:16.426 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:16.426 SPDK_OPENSTACK_NETWORK=0 00:01:16.426 VAGRANT_PACKAGE_BOX=0 00:01:16.426 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:01:16.426 FORCE_DISTRO=true 00:01:16.426 VAGRANT_BOX_VERSION= 00:01:16.426 EXTRA_VAGRANTFILES= 00:01:16.426 NIC_MODEL=e1000 00:01:16.426 00:01:16.426 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt' 00:01:16.426 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:01:19.716 Bringing machine 'default' up with 'libvirt' provider... 00:01:19.975 ==> default: Creating image (snapshot of base box volume). 00:01:20.235 ==> default: Creating domain with the following settings... 
00:01:20.235 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732648158_ed7fbbfabc3b379a4454 00:01:20.235 ==> default: -- Domain type: kvm 00:01:20.235 ==> default: -- Cpus: 10 00:01:20.235 ==> default: -- Feature: acpi 00:01:20.235 ==> default: -- Feature: apic 00:01:20.235 ==> default: -- Feature: pae 00:01:20.235 ==> default: -- Memory: 12288M 00:01:20.235 ==> default: -- Memory Backing: hugepages: 00:01:20.235 ==> default: -- Management MAC: 00:01:20.235 ==> default: -- Loader: 00:01:20.235 ==> default: -- Nvram: 00:01:20.235 ==> default: -- Base box: spdk/fedora39 00:01:20.235 ==> default: -- Storage pool: default 00:01:20.235 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732648158_ed7fbbfabc3b379a4454.img (20G) 00:01:20.235 ==> default: -- Volume Cache: default 00:01:20.235 ==> default: -- Kernel: 00:01:20.235 ==> default: -- Initrd: 00:01:20.235 ==> default: -- Graphics Type: vnc 00:01:20.235 ==> default: -- Graphics Port: -1 00:01:20.235 ==> default: -- Graphics IP: 127.0.0.1 00:01:20.235 ==> default: -- Graphics Password: Not defined 00:01:20.235 ==> default: -- Video Type: cirrus 00:01:20.235 ==> default: -- Video VRAM: 9216 00:01:20.235 ==> default: -- Sound Type: 00:01:20.235 ==> default: -- Keymap: en-us 00:01:20.235 ==> default: -- TPM Path: 00:01:20.235 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:20.235 ==> default: -- Command line args: 00:01:20.235 ==> default: -> value=-device, 00:01:20.235 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:20.235 ==> default: -> value=-drive, 00:01:20.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:20.235 ==> default: -> value=-device, 00:01:20.235 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.235 ==> default: -> value=-device, 00:01:20.235 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:20.235 ==> default: -> value=-drive, 00:01:20.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:20.235 ==> default: -> value=-device, 00:01:20.235 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.235 ==> default: -> value=-drive, 00:01:20.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:20.235 ==> default: -> value=-device, 00:01:20.235 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.235 ==> default: -> value=-drive, 00:01:20.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:20.235 ==> default: -> value=-device, 00:01:20.235 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.235 ==> default: Creating shared folders metadata... 00:01:20.235 ==> default: Starting domain. 00:01:22.139 ==> default: Waiting for domain to get an IP address... 00:01:40.326 ==> default: Waiting for SSH to become available... 00:01:41.264 ==> default: Configuring and enabling network interfaces... 
00:01:45.575 default: SSH address: 192.168.121.70:22 00:01:45.575 default: SSH username: vagrant 00:01:45.575 default: SSH auth method: private key 00:01:48.109 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:56.228 ==> default: Mounting SSHFS shared folder... 00:01:57.164 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:57.164 ==> default: Checking Mount.. 00:01:58.541 ==> default: Folder Successfully Mounted! 00:01:58.541 ==> default: Running provisioner: file... 00:01:59.479 default: ~/.gitconfig => .gitconfig 00:01:59.738 00:01:59.738 SUCCESS! 00:01:59.738 00:01:59.738 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:01:59.738 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:59.738 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:01:59.738 00:01:59.747 [Pipeline] } 00:01:59.764 [Pipeline] // stage 00:01:59.773 [Pipeline] dir 00:01:59.774 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt 00:01:59.776 [Pipeline] { 00:01:59.790 [Pipeline] catchError 00:01:59.792 [Pipeline] { 00:01:59.807 [Pipeline] sh 00:02:00.089 + vagrant ssh-config --host vagrant 00:02:00.089 + sed -ne /^Host/,$p 00:02:00.089 + tee ssh_conf 00:02:03.377 Host vagrant 00:02:03.377 HostName 192.168.121.70 00:02:03.377 User vagrant 00:02:03.377 Port 22 00:02:03.377 UserKnownHostsFile /dev/null 00:02:03.377 StrictHostKeyChecking no 00:02:03.377 PasswordAuthentication no 00:02:03.377 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:03.377 IdentitiesOnly yes 00:02:03.377 LogLevel FATAL 00:02:03.377 ForwardAgent yes 00:02:03.377 ForwardX11 yes 00:02:03.377 00:02:03.391 [Pipeline] withEnv 00:02:03.394 [Pipeline] { 00:02:03.410 [Pipeline] sh 00:02:03.691 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:03.691 source /etc/os-release 00:02:03.691 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.691 # Minimal, systemd-like check. 00:02:03.691 if [[ -e /.dockerenv ]]; then 00:02:03.691 # Clear garbage from the node's name: 00:02:03.691 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.691 # $HOSTNAME is the actual container id 00:02:03.691 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.691 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:03.691 # We can assume this is a mount from a host where container is running, 00:02:03.691 # so fetch its hostname to easily identify the target swarm worker. 
00:02:03.691 container="$(< /etc/hostname) ($agent)" 00:02:03.691 else 00:02:03.691 # Fallback 00:02:03.691 container=$agent 00:02:03.691 fi 00:02:03.691 fi 00:02:03.691 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.691 00:02:03.962 [Pipeline] } 00:02:03.978 [Pipeline] // withEnv 00:02:03.985 [Pipeline] setCustomBuildProperty 00:02:03.999 [Pipeline] stage 00:02:04.001 [Pipeline] { (Tests) 00:02:04.048 [Pipeline] sh 00:02:04.330 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.345 [Pipeline] sh 00:02:04.629 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.903 [Pipeline] timeout 00:02:04.903 Timeout set to expire in 1 hr 0 min 00:02:04.906 [Pipeline] { 00:02:04.923 [Pipeline] sh 00:02:05.206 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:05.774 HEAD is now at 67afc973b bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:02:05.787 [Pipeline] sh 00:02:06.069 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:06.344 [Pipeline] sh 00:02:06.623 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:06.952 [Pipeline] sh 00:02:07.230 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:07.489 ++ readlink -f spdk_repo 00:02:07.489 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:07.489 + [[ -n /home/vagrant/spdk_repo ]] 00:02:07.489 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:07.489 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:07.489 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:07.489 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:07.489 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:07.489 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:07.489 + cd /home/vagrant/spdk_repo 00:02:07.489 + source /etc/os-release 00:02:07.489 ++ NAME='Fedora Linux' 00:02:07.489 ++ VERSION='39 (Cloud Edition)' 00:02:07.489 ++ ID=fedora 00:02:07.489 ++ VERSION_ID=39 00:02:07.489 ++ VERSION_CODENAME= 00:02:07.489 ++ PLATFORM_ID=platform:f39 00:02:07.489 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:07.489 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:07.489 ++ LOGO=fedora-logo-icon 00:02:07.489 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:07.489 ++ HOME_URL=https://fedoraproject.org/ 00:02:07.489 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:07.489 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:07.489 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:07.489 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:07.489 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:07.489 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:07.489 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:07.489 ++ SUPPORT_END=2024-11-12 00:02:07.489 ++ VARIANT='Cloud Edition' 00:02:07.489 ++ VARIANT_ID=cloud 00:02:07.489 + uname -a 00:02:07.489 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:07.489 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:08.059 Hugepages 00:02:08.059 node hugesize free / total 00:02:08.059 node0 1048576kB 0 / 0 00:02:08.059 node0 2048kB 0 / 0 00:02:08.059 00:02:08.059 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:08.059 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:08.059 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:08.059 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:08.059 + rm -f /tmp/spdk-ld-path 00:02:08.059 + source autorun-spdk.conf 00:02:08.059 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.059 ++ SPDK_TEST_NVMF=1 00:02:08.059 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.059 ++ SPDK_TEST_URING=1 00:02:08.059 ++ SPDK_TEST_USDT=1 00:02:08.059 ++ SPDK_RUN_UBSAN=1 00:02:08.059 ++ NET_TYPE=virt 00:02:08.059 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.059 ++ RUN_NIGHTLY=0 00:02:08.059 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:08.059 + [[ -n '' ]] 00:02:08.059 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:08.059 + for M in /var/spdk/build-*-manifest.txt 00:02:08.059 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:08.059 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.059 + for M in /var/spdk/build-*-manifest.txt 00:02:08.059 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:08.059 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.059 + for M in /var/spdk/build-*-manifest.txt 00:02:08.059 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:08.059 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.059 ++ uname 00:02:08.059 + [[ Linux == \L\i\n\u\x ]] 00:02:08.059 + sudo dmesg -T 00:02:08.059 + sudo dmesg --clear 00:02:08.059 + dmesg_pid=5195 00:02:08.059 + [[ Fedora Linux == FreeBSD ]] 00:02:08.059 + sudo dmesg -Tw 00:02:08.059 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.059 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.059 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:08.059 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.059 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.059 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.059 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.059 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.059 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.059 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.059 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.059 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.059 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.059 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.059 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.059 19:10:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:08.059 19:10:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.059 19:10:06 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:08.059 19:10:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:08.059 19:10:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.319 19:10:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:08.319 19:10:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.319 19:10:06 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:08.319 19:10:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.319 19:10:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.319 19:10:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.319 19:10:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.319 19:10:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.319 19:10:06 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.319 19:10:06 -- paths/export.sh@5 -- $ export PATH 00:02:08.319 19:10:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.319 19:10:06 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.319 19:10:06 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:08.319 19:10:06 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732648206.XXXXXX 00:02:08.319 19:10:06 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732648206.8Cy06Z 00:02:08.319 19:10:06 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:08.319 19:10:06 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:08.319 19:10:06 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:08.319 19:10:06 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.319 19:10:06 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.319 19:10:06 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:08.319 19:10:06 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:08.319 19:10:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.320 19:10:06 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:08.320 19:10:06 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:08.320 19:10:06 -- pm/common@17 -- $ local monitor 00:02:08.320 19:10:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.320 19:10:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.320 19:10:06 -- pm/common@25 -- $ sleep 1 00:02:08.320 19:10:06 -- pm/common@21 -- $ date +%s 00:02:08.320 19:10:06 -- pm/common@21 -- $ date +%s 00:02:08.320 19:10:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732648206 00:02:08.320 19:10:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732648206 00:02:08.320 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732648206_collect-cpu-load.pm.log 00:02:08.320 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732648206_collect-vmstat.pm.log 00:02:09.257 19:10:07 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:09.257 19:10:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.257 19:10:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.257 19:10:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:09.257 19:10:07 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.257 Tue Nov 26 07:10:07 PM UTC 2024 00:02:09.257 19:10:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.257 v25.01-pre-243-g67afc973b 00:02:09.257 19:10:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:09.257 19:10:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.257 19:10:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.257 19:10:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:09.257 19:10:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.257 19:10:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.257 ************************************ 00:02:09.257 START TEST ubsan 00:02:09.257 ************************************ 00:02:09.257 using ubsan 00:02:09.257 19:10:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:09.257 00:02:09.257 real 0m0.000s 00:02:09.257 user 0m0.000s 00:02:09.257 sys 0m0.000s 00:02:09.257 19:10:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:09.257 ************************************ 00:02:09.257 END TEST ubsan 00:02:09.257 19:10:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.257 ************************************ 00:02:09.257 19:10:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:09.257 19:10:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:09.257 19:10:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:09.257 19:10:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:09.257 19:10:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:09.257 19:10:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:09.257 19:10:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:09.257 19:10:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:09.257 19:10:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:09.517 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:09.517 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.776 Using 'verbs' RDMA provider 00:02:23.368 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:38.280 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:38.280 Creating mk/config.mk...done. 00:02:38.280 Creating mk/cc.flags.mk...done. 00:02:38.280 Type 'make' to build. 
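For reference, the configure step recorded above can be replayed by hand on a comparable VM. This is a minimal sketch built only from the options visible in this log; the repository path, fio location, and -j10 job count are taken from the log itself and are assumptions about the target machine, not requirements:

  # Sketch: reproduce the SPDK configure/build this job performed (paths assumed from this log).
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10   # same parallelism as the run_test make invocation below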
00:02:38.280 19:10:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:38.280 19:10:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:38.280 19:10:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:38.280 19:10:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.280 ************************************ 00:02:38.280 START TEST make 00:02:38.280 ************************************ 00:02:38.280 19:10:35 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:38.280 make[1]: Nothing to be done for 'all'. 00:02:50.548 The Meson build system 00:02:50.548 Version: 1.5.0 00:02:50.548 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:50.548 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:50.548 Build type: native build 00:02:50.548 Program cat found: YES (/usr/bin/cat) 00:02:50.548 Project name: DPDK 00:02:50.548 Project version: 24.03.0 00:02:50.548 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:50.548 C linker for the host machine: cc ld.bfd 2.40-14 00:02:50.548 Host machine cpu family: x86_64 00:02:50.548 Host machine cpu: x86_64 00:02:50.548 Message: ## Building in Developer Mode ## 00:02:50.548 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:50.548 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:50.548 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:50.548 Program python3 found: YES (/usr/bin/python3) 00:02:50.548 Program cat found: YES (/usr/bin/cat) 00:02:50.548 Compiler for C supports arguments -march=native: YES 00:02:50.548 Checking for size of "void *" : 8 00:02:50.548 Checking for size of "void *" : 8 (cached) 00:02:50.548 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:50.548 Library m found: YES 00:02:50.548 Library numa found: YES 00:02:50.548 Has header "numaif.h" : YES 00:02:50.548 Library fdt found: NO 00:02:50.548 Library execinfo found: NO 00:02:50.548 Has header "execinfo.h" : YES 00:02:50.548 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:50.548 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:50.548 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:50.548 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:50.548 Run-time dependency openssl found: YES 3.1.1 00:02:50.548 Run-time dependency libpcap found: YES 1.10.4 00:02:50.548 Has header "pcap.h" with dependency libpcap: YES 00:02:50.548 Compiler for C supports arguments -Wcast-qual: YES 00:02:50.548 Compiler for C supports arguments -Wdeprecated: YES 00:02:50.548 Compiler for C supports arguments -Wformat: YES 00:02:50.548 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:50.548 Compiler for C supports arguments -Wformat-security: NO 00:02:50.548 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.548 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:50.548 Compiler for C supports arguments -Wnested-externs: YES 00:02:50.548 Compiler for C supports arguments -Wold-style-definition: YES 00:02:50.548 Compiler for C supports arguments -Wpointer-arith: YES 00:02:50.548 Compiler for C supports arguments -Wsign-compare: YES 00:02:50.548 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:50.548 Compiler for C supports arguments -Wundef: YES 00:02:50.548 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.548 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:50.548 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:50.548 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.548 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:50.548 Program objdump found: YES (/usr/bin/objdump) 00:02:50.548 Compiler for C supports arguments -mavx512f: YES 00:02:50.548 Checking if "AVX512 checking" compiles: YES 00:02:50.548 Fetching value of define "__SSE4_2__" : 1 00:02:50.548 Fetching value of define "__AES__" : 1 00:02:50.548 Fetching value of define "__AVX__" : 1 00:02:50.548 Fetching value of define "__AVX2__" : 1 00:02:50.548 Fetching value of define "__AVX512BW__" : (undefined) 00:02:50.548 Fetching value of define "__AVX512CD__" : (undefined) 00:02:50.548 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:50.548 Fetching value of define "__AVX512F__" : (undefined) 00:02:50.548 Fetching value of define "__AVX512VL__" : (undefined) 00:02:50.548 Fetching value of define "__PCLMUL__" : 1 00:02:50.548 Fetching value of define "__RDRND__" : 1 00:02:50.548 Fetching value of define "__RDSEED__" : 1 00:02:50.548 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:50.548 Fetching value of define "__znver1__" : (undefined) 00:02:50.548 Fetching value of define "__znver2__" : (undefined) 00:02:50.548 Fetching value of define "__znver3__" : (undefined) 00:02:50.548 Fetching value of define "__znver4__" : (undefined) 00:02:50.548 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:50.548 Message: lib/log: Defining dependency "log" 00:02:50.548 Message: lib/kvargs: Defining dependency "kvargs" 00:02:50.548 Message: lib/telemetry: Defining dependency "telemetry" 00:02:50.548 Checking for function "getentropy" : NO 00:02:50.548 Message: lib/eal: Defining dependency "eal" 00:02:50.548 Message: lib/ring: Defining dependency "ring" 00:02:50.548 Message: lib/rcu: Defining dependency "rcu" 00:02:50.548 Message: lib/mempool: Defining dependency "mempool" 00:02:50.548 Message: lib/mbuf: Defining dependency "mbuf" 00:02:50.548 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:50.548 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:50.548 Compiler for C supports arguments -mpclmul: YES 00:02:50.548 Compiler for C supports arguments -maes: YES 00:02:50.548 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:50.548 Compiler for C supports arguments -mavx512bw: YES 00:02:50.548 Compiler for C supports arguments -mavx512dq: YES 00:02:50.548 Compiler for C supports arguments -mavx512vl: YES 00:02:50.548 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:50.548 Compiler for C supports arguments -mavx2: YES 00:02:50.548 Compiler for C supports arguments -mavx: YES 00:02:50.548 Message: lib/net: Defining dependency "net" 00:02:50.548 Message: lib/meter: Defining dependency "meter" 00:02:50.548 Message: lib/ethdev: Defining dependency "ethdev" 00:02:50.548 Message: lib/pci: Defining dependency "pci" 00:02:50.548 Message: lib/cmdline: Defining dependency "cmdline" 00:02:50.548 Message: lib/hash: Defining dependency "hash" 00:02:50.549 Message: lib/timer: Defining dependency "timer" 00:02:50.549 Message: lib/compressdev: Defining dependency "compressdev" 00:02:50.549 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:50.549 Message: lib/dmadev: Defining dependency "dmadev" 00:02:50.549 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:50.549 Message: lib/power: Defining 
dependency "power" 00:02:50.549 Message: lib/reorder: Defining dependency "reorder" 00:02:50.549 Message: lib/security: Defining dependency "security" 00:02:50.549 Has header "linux/userfaultfd.h" : YES 00:02:50.549 Has header "linux/vduse.h" : YES 00:02:50.549 Message: lib/vhost: Defining dependency "vhost" 00:02:50.549 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:50.549 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:50.549 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:50.549 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:50.549 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:50.549 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:50.549 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:50.549 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:50.549 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:50.549 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:50.549 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:50.549 Configuring doxy-api-html.conf using configuration 00:02:50.549 Configuring doxy-api-man.conf using configuration 00:02:50.549 Program mandb found: YES (/usr/bin/mandb) 00:02:50.549 Program sphinx-build found: NO 00:02:50.549 Configuring rte_build_config.h using configuration 00:02:50.549 Message: 00:02:50.549 ================= 00:02:50.549 Applications Enabled 00:02:50.549 ================= 00:02:50.549 00:02:50.549 apps: 00:02:50.549 00:02:50.549 00:02:50.549 Message: 00:02:50.549 ================= 00:02:50.549 Libraries Enabled 00:02:50.549 ================= 00:02:50.549 00:02:50.549 libs: 00:02:50.549 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:50.549 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:50.549 cryptodev, dmadev, power, reorder, security, vhost, 00:02:50.549 00:02:50.549 Message: 00:02:50.549 =============== 00:02:50.549 Drivers Enabled 00:02:50.549 =============== 00:02:50.549 00:02:50.549 common: 00:02:50.549 00:02:50.549 bus: 00:02:50.549 pci, vdev, 00:02:50.549 mempool: 00:02:50.549 ring, 00:02:50.549 dma: 00:02:50.549 00:02:50.549 net: 00:02:50.549 00:02:50.549 crypto: 00:02:50.549 00:02:50.549 compress: 00:02:50.549 00:02:50.549 vdpa: 00:02:50.549 00:02:50.549 00:02:50.549 Message: 00:02:50.549 ================= 00:02:50.549 Content Skipped 00:02:50.549 ================= 00:02:50.549 00:02:50.549 apps: 00:02:50.549 dumpcap: explicitly disabled via build config 00:02:50.549 graph: explicitly disabled via build config 00:02:50.549 pdump: explicitly disabled via build config 00:02:50.549 proc-info: explicitly disabled via build config 00:02:50.549 test-acl: explicitly disabled via build config 00:02:50.549 test-bbdev: explicitly disabled via build config 00:02:50.549 test-cmdline: explicitly disabled via build config 00:02:50.549 test-compress-perf: explicitly disabled via build config 00:02:50.549 test-crypto-perf: explicitly disabled via build config 00:02:50.549 test-dma-perf: explicitly disabled via build config 00:02:50.549 test-eventdev: explicitly disabled via build config 00:02:50.549 test-fib: explicitly disabled via build config 00:02:50.549 test-flow-perf: explicitly disabled via build config 00:02:50.549 test-gpudev: explicitly disabled via build config 00:02:50.549 test-mldev: explicitly disabled via build config 00:02:50.549 test-pipeline: 
explicitly disabled via build config 00:02:50.549 test-pmd: explicitly disabled via build config 00:02:50.549 test-regex: explicitly disabled via build config 00:02:50.549 test-sad: explicitly disabled via build config 00:02:50.549 test-security-perf: explicitly disabled via build config 00:02:50.549 00:02:50.549 libs: 00:02:50.549 argparse: explicitly disabled via build config 00:02:50.549 metrics: explicitly disabled via build config 00:02:50.549 acl: explicitly disabled via build config 00:02:50.549 bbdev: explicitly disabled via build config 00:02:50.549 bitratestats: explicitly disabled via build config 00:02:50.549 bpf: explicitly disabled via build config 00:02:50.549 cfgfile: explicitly disabled via build config 00:02:50.549 distributor: explicitly disabled via build config 00:02:50.549 efd: explicitly disabled via build config 00:02:50.549 eventdev: explicitly disabled via build config 00:02:50.549 dispatcher: explicitly disabled via build config 00:02:50.549 gpudev: explicitly disabled via build config 00:02:50.549 gro: explicitly disabled via build config 00:02:50.549 gso: explicitly disabled via build config 00:02:50.549 ip_frag: explicitly disabled via build config 00:02:50.549 jobstats: explicitly disabled via build config 00:02:50.549 latencystats: explicitly disabled via build config 00:02:50.549 lpm: explicitly disabled via build config 00:02:50.549 member: explicitly disabled via build config 00:02:50.549 pcapng: explicitly disabled via build config 00:02:50.549 rawdev: explicitly disabled via build config 00:02:50.549 regexdev: explicitly disabled via build config 00:02:50.549 mldev: explicitly disabled via build config 00:02:50.549 rib: explicitly disabled via build config 00:02:50.549 sched: explicitly disabled via build config 00:02:50.549 stack: explicitly disabled via build config 00:02:50.549 ipsec: explicitly disabled via build config 00:02:50.549 pdcp: explicitly disabled via build config 00:02:50.549 fib: explicitly disabled via build config 00:02:50.549 port: explicitly disabled via build config 00:02:50.549 pdump: explicitly disabled via build config 00:02:50.549 table: explicitly disabled via build config 00:02:50.549 pipeline: explicitly disabled via build config 00:02:50.549 graph: explicitly disabled via build config 00:02:50.549 node: explicitly disabled via build config 00:02:50.549 00:02:50.549 drivers: 00:02:50.549 common/cpt: not in enabled drivers build config 00:02:50.549 common/dpaax: not in enabled drivers build config 00:02:50.549 common/iavf: not in enabled drivers build config 00:02:50.549 common/idpf: not in enabled drivers build config 00:02:50.549 common/ionic: not in enabled drivers build config 00:02:50.549 common/mvep: not in enabled drivers build config 00:02:50.549 common/octeontx: not in enabled drivers build config 00:02:50.549 bus/auxiliary: not in enabled drivers build config 00:02:50.549 bus/cdx: not in enabled drivers build config 00:02:50.549 bus/dpaa: not in enabled drivers build config 00:02:50.549 bus/fslmc: not in enabled drivers build config 00:02:50.549 bus/ifpga: not in enabled drivers build config 00:02:50.549 bus/platform: not in enabled drivers build config 00:02:50.549 bus/uacce: not in enabled drivers build config 00:02:50.549 bus/vmbus: not in enabled drivers build config 00:02:50.549 common/cnxk: not in enabled drivers build config 00:02:50.549 common/mlx5: not in enabled drivers build config 00:02:50.549 common/nfp: not in enabled drivers build config 00:02:50.549 common/nitrox: not in enabled drivers build config 
00:02:50.549 common/qat: not in enabled drivers build config 00:02:50.549 common/sfc_efx: not in enabled drivers build config 00:02:50.549 mempool/bucket: not in enabled drivers build config 00:02:50.549 mempool/cnxk: not in enabled drivers build config 00:02:50.549 mempool/dpaa: not in enabled drivers build config 00:02:50.549 mempool/dpaa2: not in enabled drivers build config 00:02:50.549 mempool/octeontx: not in enabled drivers build config 00:02:50.549 mempool/stack: not in enabled drivers build config 00:02:50.549 dma/cnxk: not in enabled drivers build config 00:02:50.549 dma/dpaa: not in enabled drivers build config 00:02:50.549 dma/dpaa2: not in enabled drivers build config 00:02:50.549 dma/hisilicon: not in enabled drivers build config 00:02:50.549 dma/idxd: not in enabled drivers build config 00:02:50.549 dma/ioat: not in enabled drivers build config 00:02:50.549 dma/skeleton: not in enabled drivers build config 00:02:50.549 net/af_packet: not in enabled drivers build config 00:02:50.549 net/af_xdp: not in enabled drivers build config 00:02:50.549 net/ark: not in enabled drivers build config 00:02:50.549 net/atlantic: not in enabled drivers build config 00:02:50.549 net/avp: not in enabled drivers build config 00:02:50.549 net/axgbe: not in enabled drivers build config 00:02:50.549 net/bnx2x: not in enabled drivers build config 00:02:50.549 net/bnxt: not in enabled drivers build config 00:02:50.549 net/bonding: not in enabled drivers build config 00:02:50.549 net/cnxk: not in enabled drivers build config 00:02:50.549 net/cpfl: not in enabled drivers build config 00:02:50.549 net/cxgbe: not in enabled drivers build config 00:02:50.549 net/dpaa: not in enabled drivers build config 00:02:50.549 net/dpaa2: not in enabled drivers build config 00:02:50.549 net/e1000: not in enabled drivers build config 00:02:50.549 net/ena: not in enabled drivers build config 00:02:50.549 net/enetc: not in enabled drivers build config 00:02:50.549 net/enetfec: not in enabled drivers build config 00:02:50.549 net/enic: not in enabled drivers build config 00:02:50.549 net/failsafe: not in enabled drivers build config 00:02:50.549 net/fm10k: not in enabled drivers build config 00:02:50.549 net/gve: not in enabled drivers build config 00:02:50.549 net/hinic: not in enabled drivers build config 00:02:50.549 net/hns3: not in enabled drivers build config 00:02:50.549 net/i40e: not in enabled drivers build config 00:02:50.549 net/iavf: not in enabled drivers build config 00:02:50.549 net/ice: not in enabled drivers build config 00:02:50.549 net/idpf: not in enabled drivers build config 00:02:50.549 net/igc: not in enabled drivers build config 00:02:50.549 net/ionic: not in enabled drivers build config 00:02:50.549 net/ipn3ke: not in enabled drivers build config 00:02:50.549 net/ixgbe: not in enabled drivers build config 00:02:50.549 net/mana: not in enabled drivers build config 00:02:50.549 net/memif: not in enabled drivers build config 00:02:50.549 net/mlx4: not in enabled drivers build config 00:02:50.549 net/mlx5: not in enabled drivers build config 00:02:50.549 net/mvneta: not in enabled drivers build config 00:02:50.549 net/mvpp2: not in enabled drivers build config 00:02:50.549 net/netvsc: not in enabled drivers build config 00:02:50.550 net/nfb: not in enabled drivers build config 00:02:50.550 net/nfp: not in enabled drivers build config 00:02:50.550 net/ngbe: not in enabled drivers build config 00:02:50.550 net/null: not in enabled drivers build config 00:02:50.550 net/octeontx: not in enabled drivers 
build config 00:02:50.550 net/octeon_ep: not in enabled drivers build config 00:02:50.550 net/pcap: not in enabled drivers build config 00:02:50.550 net/pfe: not in enabled drivers build config 00:02:50.550 net/qede: not in enabled drivers build config 00:02:50.550 net/ring: not in enabled drivers build config 00:02:50.550 net/sfc: not in enabled drivers build config 00:02:50.550 net/softnic: not in enabled drivers build config 00:02:50.550 net/tap: not in enabled drivers build config 00:02:50.550 net/thunderx: not in enabled drivers build config 00:02:50.550 net/txgbe: not in enabled drivers build config 00:02:50.550 net/vdev_netvsc: not in enabled drivers build config 00:02:50.550 net/vhost: not in enabled drivers build config 00:02:50.550 net/virtio: not in enabled drivers build config 00:02:50.550 net/vmxnet3: not in enabled drivers build config 00:02:50.550 raw/*: missing internal dependency, "rawdev" 00:02:50.550 crypto/armv8: not in enabled drivers build config 00:02:50.550 crypto/bcmfs: not in enabled drivers build config 00:02:50.550 crypto/caam_jr: not in enabled drivers build config 00:02:50.550 crypto/ccp: not in enabled drivers build config 00:02:50.550 crypto/cnxk: not in enabled drivers build config 00:02:50.550 crypto/dpaa_sec: not in enabled drivers build config 00:02:50.550 crypto/dpaa2_sec: not in enabled drivers build config 00:02:50.550 crypto/ipsec_mb: not in enabled drivers build config 00:02:50.550 crypto/mlx5: not in enabled drivers build config 00:02:50.550 crypto/mvsam: not in enabled drivers build config 00:02:50.550 crypto/nitrox: not in enabled drivers build config 00:02:50.550 crypto/null: not in enabled drivers build config 00:02:50.550 crypto/octeontx: not in enabled drivers build config 00:02:50.550 crypto/openssl: not in enabled drivers build config 00:02:50.550 crypto/scheduler: not in enabled drivers build config 00:02:50.550 crypto/uadk: not in enabled drivers build config 00:02:50.550 crypto/virtio: not in enabled drivers build config 00:02:50.550 compress/isal: not in enabled drivers build config 00:02:50.550 compress/mlx5: not in enabled drivers build config 00:02:50.550 compress/nitrox: not in enabled drivers build config 00:02:50.550 compress/octeontx: not in enabled drivers build config 00:02:50.550 compress/zlib: not in enabled drivers build config 00:02:50.550 regex/*: missing internal dependency, "regexdev" 00:02:50.550 ml/*: missing internal dependency, "mldev" 00:02:50.550 vdpa/ifc: not in enabled drivers build config 00:02:50.550 vdpa/mlx5: not in enabled drivers build config 00:02:50.550 vdpa/nfp: not in enabled drivers build config 00:02:50.550 vdpa/sfc: not in enabled drivers build config 00:02:50.550 event/*: missing internal dependency, "eventdev" 00:02:50.550 baseband/*: missing internal dependency, "bbdev" 00:02:50.550 gpu/*: missing internal dependency, "gpudev" 00:02:50.550 00:02:50.550 00:02:50.550 Build targets in project: 85 00:02:50.550 00:02:50.550 DPDK 24.03.0 00:02:50.550 00:02:50.550 User defined options 00:02:50.550 buildtype : debug 00:02:50.550 default_library : shared 00:02:50.550 libdir : lib 00:02:50.550 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:50.550 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:50.550 c_link_args : 00:02:50.550 cpu_instruction_set: native 00:02:50.550 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:50.550 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:50.550 enable_docs : false 00:02:50.550 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:50.550 enable_kmods : false 00:02:50.550 max_lcores : 128 00:02:50.550 tests : false 00:02:50.550 00:02:50.550 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.550 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:50.550 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.550 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.550 [3/268] Linking static target lib/librte_kvargs.a 00:02:50.550 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.550 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.550 [6/268] Linking static target lib/librte_log.a 00:02:50.550 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.550 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.550 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.550 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.550 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.550 [12/268] Linking static target lib/librte_telemetry.a 00:02:50.550 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.807 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.807 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.807 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.807 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.807 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.065 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.065 [20/268] Linking target lib/librte_log.so.24.1 00:02:51.324 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:51.324 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:51.324 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.582 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:51.582 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:51.582 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.582 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.582 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:51.582 [29/268] Linking target lib/librte_telemetry.so.24.1 00:02:51.582 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:51.582 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.582 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:51.840 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:51.840 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:51.840 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.840 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:52.099 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:52.358 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:52.358 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:52.358 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:52.358 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:52.617 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:52.617 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:52.617 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:52.617 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:52.617 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:52.617 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:52.617 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:52.875 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:52.875 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:53.134 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:53.393 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:53.393 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:53.393 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:53.393 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:53.393 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:53.652 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:53.652 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:53.652 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:53.652 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:53.652 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:53.911 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.168 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.168 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.168 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.440 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.440 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:54.441 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:54.713 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:54.713 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:54.713 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:54.713 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:54.972 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:54.972 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:54.972 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:54.972 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:55.231 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:55.231 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.231 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.490 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.490 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.749 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.749 [83/268] Linking static target lib/librte_ring.a 00:02:55.749 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.749 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.749 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:55.749 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:55.749 [88/268] Linking static target lib/librte_rcu.a 00:02:55.749 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:56.009 [90/268] Linking static target lib/librte_eal.a 00:02:56.009 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.009 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.268 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.268 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.268 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.268 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.268 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.268 [98/268] Linking static target lib/librte_mempool.a 00:02:56.527 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:56.527 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.527 [101/268] Linking static target lib/librte_mbuf.a 00:02:56.527 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:56.527 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:56.786 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:56.786 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.045 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.045 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.045 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.045 [109/268] Linking static target lib/librte_net.a 00:02:57.305 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:57.305 [111/268] Linking static target lib/librte_meter.a 00:02:57.305 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.564 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.564 [114/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.564 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.564 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:57.564 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.564 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.822 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.389 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.389 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:58.389 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:58.648 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:58.648 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.648 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.648 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.648 [127/268] Linking static target lib/librte_pci.a 00:02:58.907 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:58.907 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.907 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:58.907 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:58.907 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:58.907 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:58.907 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.165 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.165 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.165 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.165 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.165 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.165 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.165 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.165 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.165 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.165 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:59.165 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:59.423 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.423 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.423 [148/268] Linking static target lib/librte_ethdev.a 00:02:59.682 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.682 [150/268] Linking static target lib/librte_cmdline.a 00:02:59.682 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:59.941 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:59.941 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.249 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:00.249 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.249 [156/268] Linking static target lib/librte_timer.a 00:03:00.249 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:00.249 [158/268] Linking static target lib/librte_hash.a 00:03:00.249 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.507 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.507 [161/268] Linking static target lib/librte_compressdev.a 00:03:00.507 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.766 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:00.766 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.766 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.024 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:01.024 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:01.024 [168/268] Linking static target lib/librte_dmadev.a 00:03:01.282 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.282 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.282 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.282 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.282 [173/268] Linking static target lib/librte_cryptodev.a 00:03:01.282 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.282 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.542 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.542 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.801 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:01.801 [179/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:01.801 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.060 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.060 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.060 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.060 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.319 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.319 [186/268] Linking static target lib/librte_power.a 00:03:02.578 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.578 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.578 [189/268] Linking static target lib/librte_security.a 00:03:02.578 [190/268] Linking static target lib/librte_reorder.a 00:03:02.837 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:02.837 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:02.837 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.096 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.355 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.355 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.614 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.614 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.614 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.614 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:03.873 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.873 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.132 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.132 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.132 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.391 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.391 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.391 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.391 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.649 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.649 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.649 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.908 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.908 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.908 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.908 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:04.908 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.908 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.908 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.908 [220/268] Linking static target drivers/librte_bus_vdev.a 00:03:04.908 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.908 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.167 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.167 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.167 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.167 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.167 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:05.426 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:06.022 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.022 [230/268] Linking static target lib/librte_vhost.a 00:03:06.588 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.588 [232/268] Linking target lib/librte_eal.so.24.1 00:03:06.846 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:06.846 [234/268] Linking target lib/librte_meter.so.24.1 00:03:06.846 [235/268] Linking target lib/librte_pci.so.24.1 00:03:06.846 [236/268] Linking target lib/librte_ring.so.24.1 00:03:06.846 [237/268] Linking target lib/librte_timer.so.24.1 00:03:06.846 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:06.846 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:07.103 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:07.103 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:07.103 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:07.103 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:07.103 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:07.103 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:07.103 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:07.103 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:07.103 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:07.103 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:07.103 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.361 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:07.361 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:07.361 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.361 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:07.361 [255/268] Linking target lib/librte_net.so.24.1 00:03:07.361 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:07.361 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:07.361 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:07.619 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:07.619 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:07.619 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:07.619 [262/268] Linking target lib/librte_hash.so.24.1 00:03:07.619 [263/268] Linking target lib/librte_security.so.24.1 00:03:07.619 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:07.876 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:07.876 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:07.876 [267/268] Linking target lib/librte_power.so.24.1 00:03:07.876 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:07.876 INFO: autodetecting backend as ninja 00:03:07.876 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:34.417 CC lib/log/log.o 00:03:34.417 CC lib/ut/ut.o 00:03:34.417 CC lib/log/log_deprecated.o 00:03:34.417 CC 
lib/log/log_flags.o 00:03:34.417 CC lib/ut_mock/mock.o 00:03:34.417 LIB libspdk_ut.a 00:03:34.417 LIB libspdk_ut_mock.a 00:03:34.417 LIB libspdk_log.a 00:03:34.417 SO libspdk_ut_mock.so.6.0 00:03:34.417 SO libspdk_ut.so.2.0 00:03:34.417 SO libspdk_log.so.7.1 00:03:34.417 SYMLINK libspdk_ut_mock.so 00:03:34.417 SYMLINK libspdk_ut.so 00:03:34.417 SYMLINK libspdk_log.so 00:03:34.417 CXX lib/trace_parser/trace.o 00:03:34.417 CC lib/util/base64.o 00:03:34.417 CC lib/util/bit_array.o 00:03:34.417 CC lib/ioat/ioat.o 00:03:34.417 CC lib/util/cpuset.o 00:03:34.417 CC lib/util/crc16.o 00:03:34.417 CC lib/util/crc32.o 00:03:34.417 CC lib/dma/dma.o 00:03:34.417 CC lib/util/crc32c.o 00:03:34.417 CC lib/vfio_user/host/vfio_user_pci.o 00:03:34.417 CC lib/util/crc32_ieee.o 00:03:34.417 CC lib/vfio_user/host/vfio_user.o 00:03:34.417 CC lib/util/crc64.o 00:03:34.417 CC lib/util/dif.o 00:03:34.417 CC lib/util/fd.o 00:03:34.417 LIB libspdk_dma.a 00:03:34.417 CC lib/util/fd_group.o 00:03:34.417 SO libspdk_dma.so.5.0 00:03:34.417 CC lib/util/file.o 00:03:34.417 LIB libspdk_ioat.a 00:03:34.417 CC lib/util/hexlify.o 00:03:34.417 SYMLINK libspdk_dma.so 00:03:34.417 CC lib/util/iov.o 00:03:34.417 SO libspdk_ioat.so.7.0 00:03:34.417 CC lib/util/math.o 00:03:34.417 LIB libspdk_vfio_user.a 00:03:34.417 CC lib/util/net.o 00:03:34.417 SYMLINK libspdk_ioat.so 00:03:34.417 CC lib/util/pipe.o 00:03:34.417 SO libspdk_vfio_user.so.5.0 00:03:34.417 CC lib/util/strerror_tls.o 00:03:34.418 SYMLINK libspdk_vfio_user.so 00:03:34.418 CC lib/util/string.o 00:03:34.418 CC lib/util/uuid.o 00:03:34.418 CC lib/util/xor.o 00:03:34.418 CC lib/util/zipf.o 00:03:34.418 CC lib/util/md5.o 00:03:34.418 LIB libspdk_util.a 00:03:34.418 SO libspdk_util.so.10.1 00:03:34.418 LIB libspdk_trace_parser.a 00:03:34.418 SO libspdk_trace_parser.so.6.0 00:03:34.418 SYMLINK libspdk_util.so 00:03:34.418 SYMLINK libspdk_trace_parser.so 00:03:34.418 CC lib/conf/conf.o 00:03:34.418 CC lib/json/json_parse.o 00:03:34.418 CC lib/json/json_util.o 00:03:34.418 CC lib/env_dpdk/env.o 00:03:34.418 CC lib/env_dpdk/memory.o 00:03:34.418 CC lib/vmd/vmd.o 00:03:34.418 CC lib/env_dpdk/pci.o 00:03:34.418 CC lib/json/json_write.o 00:03:34.418 CC lib/idxd/idxd.o 00:03:34.418 CC lib/rdma_utils/rdma_utils.o 00:03:34.418 LIB libspdk_conf.a 00:03:34.418 CC lib/vmd/led.o 00:03:34.418 CC lib/env_dpdk/init.o 00:03:34.418 SO libspdk_conf.so.6.0 00:03:34.418 LIB libspdk_rdma_utils.a 00:03:34.418 LIB libspdk_json.a 00:03:34.418 SYMLINK libspdk_conf.so 00:03:34.418 CC lib/idxd/idxd_user.o 00:03:34.418 SO libspdk_rdma_utils.so.1.0 00:03:34.418 SO libspdk_json.so.6.0 00:03:34.418 CC lib/env_dpdk/threads.o 00:03:34.418 SYMLINK libspdk_rdma_utils.so 00:03:34.418 CC lib/env_dpdk/pci_ioat.o 00:03:34.418 CC lib/idxd/idxd_kernel.o 00:03:34.418 SYMLINK libspdk_json.so 00:03:34.418 CC lib/env_dpdk/pci_virtio.o 00:03:34.418 CC lib/env_dpdk/pci_vmd.o 00:03:34.418 CC lib/env_dpdk/pci_idxd.o 00:03:34.418 CC lib/rdma_provider/common.o 00:03:34.418 LIB libspdk_idxd.a 00:03:34.418 CC lib/env_dpdk/pci_event.o 00:03:34.418 LIB libspdk_vmd.a 00:03:34.418 SO libspdk_idxd.so.12.1 00:03:34.418 SO libspdk_vmd.so.6.0 00:03:34.418 CC lib/env_dpdk/sigbus_handler.o 00:03:34.418 CC lib/env_dpdk/pci_dpdk.o 00:03:34.418 CC lib/jsonrpc/jsonrpc_server.o 00:03:34.418 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:34.418 SYMLINK libspdk_idxd.so 00:03:34.418 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:34.418 SYMLINK libspdk_vmd.so 00:03:34.418 CC lib/jsonrpc/jsonrpc_client.o 00:03:34.418 CC lib/env_dpdk/pci_dpdk_2207.o 
00:03:34.418 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:34.418 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:34.418 LIB libspdk_rdma_provider.a 00:03:34.418 LIB libspdk_jsonrpc.a 00:03:34.418 SO libspdk_rdma_provider.so.7.0 00:03:34.418 SO libspdk_jsonrpc.so.6.0 00:03:34.418 SYMLINK libspdk_rdma_provider.so 00:03:34.418 SYMLINK libspdk_jsonrpc.so 00:03:34.418 CC lib/rpc/rpc.o 00:03:34.418 LIB libspdk_env_dpdk.a 00:03:34.418 SO libspdk_env_dpdk.so.15.1 00:03:34.418 LIB libspdk_rpc.a 00:03:34.418 SYMLINK libspdk_env_dpdk.so 00:03:34.418 SO libspdk_rpc.so.6.0 00:03:34.418 SYMLINK libspdk_rpc.so 00:03:34.678 CC lib/notify/notify.o 00:03:34.678 CC lib/notify/notify_rpc.o 00:03:34.678 CC lib/trace/trace.o 00:03:34.678 CC lib/trace/trace_rpc.o 00:03:34.678 CC lib/keyring/keyring_rpc.o 00:03:34.678 CC lib/keyring/keyring.o 00:03:34.678 CC lib/trace/trace_flags.o 00:03:34.938 LIB libspdk_notify.a 00:03:34.938 SO libspdk_notify.so.6.0 00:03:34.938 SYMLINK libspdk_notify.so 00:03:34.938 LIB libspdk_trace.a 00:03:35.197 LIB libspdk_keyring.a 00:03:35.197 SO libspdk_trace.so.11.0 00:03:35.197 SO libspdk_keyring.so.2.0 00:03:35.197 SYMLINK libspdk_trace.so 00:03:35.197 SYMLINK libspdk_keyring.so 00:03:35.457 CC lib/sock/sock.o 00:03:35.457 CC lib/sock/sock_rpc.o 00:03:35.457 CC lib/thread/thread.o 00:03:35.457 CC lib/thread/iobuf.o 00:03:36.026 LIB libspdk_sock.a 00:03:36.026 SO libspdk_sock.so.10.0 00:03:36.026 SYMLINK libspdk_sock.so 00:03:36.284 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:36.284 CC lib/nvme/nvme_ctrlr.o 00:03:36.284 CC lib/nvme/nvme_fabric.o 00:03:36.284 CC lib/nvme/nvme_ns.o 00:03:36.284 CC lib/nvme/nvme_ns_cmd.o 00:03:36.284 CC lib/nvme/nvme_pcie.o 00:03:36.284 CC lib/nvme/nvme_pcie_common.o 00:03:36.284 CC lib/nvme/nvme_qpair.o 00:03:36.284 CC lib/nvme/nvme.o 00:03:37.221 LIB libspdk_thread.a 00:03:37.221 SO libspdk_thread.so.11.0 00:03:37.221 CC lib/nvme/nvme_quirks.o 00:03:37.221 SYMLINK libspdk_thread.so 00:03:37.221 CC lib/nvme/nvme_transport.o 00:03:37.221 CC lib/nvme/nvme_discovery.o 00:03:37.221 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:37.221 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:37.221 CC lib/accel/accel.o 00:03:37.480 CC lib/nvme/nvme_tcp.o 00:03:37.480 CC lib/nvme/nvme_opal.o 00:03:37.480 CC lib/nvme/nvme_io_msg.o 00:03:37.738 CC lib/nvme/nvme_poll_group.o 00:03:37.738 CC lib/nvme/nvme_zns.o 00:03:37.997 CC lib/nvme/nvme_stubs.o 00:03:37.997 CC lib/nvme/nvme_auth.o 00:03:37.997 CC lib/nvme/nvme_cuse.o 00:03:37.997 CC lib/nvme/nvme_rdma.o 00:03:38.256 CC lib/blob/blobstore.o 00:03:38.256 CC lib/accel/accel_rpc.o 00:03:38.515 CC lib/accel/accel_sw.o 00:03:38.515 CC lib/blob/request.o 00:03:38.515 CC lib/init/json_config.o 00:03:38.515 CC lib/init/subsystem.o 00:03:38.515 CC lib/init/subsystem_rpc.o 00:03:38.774 CC lib/init/rpc.o 00:03:38.774 CC lib/blob/zeroes.o 00:03:38.774 LIB libspdk_accel.a 00:03:38.774 CC lib/blob/blob_bs_dev.o 00:03:38.774 SO libspdk_accel.so.16.0 00:03:39.033 LIB libspdk_init.a 00:03:39.033 CC lib/virtio/virtio.o 00:03:39.033 SYMLINK libspdk_accel.so 00:03:39.033 CC lib/virtio/virtio_vhost_user.o 00:03:39.033 CC lib/virtio/virtio_vfio_user.o 00:03:39.033 CC lib/virtio/virtio_pci.o 00:03:39.033 SO libspdk_init.so.6.0 00:03:39.033 CC lib/fsdev/fsdev.o 00:03:39.033 SYMLINK libspdk_init.so 00:03:39.033 CC lib/fsdev/fsdev_io.o 00:03:39.033 CC lib/bdev/bdev.o 00:03:39.033 CC lib/fsdev/fsdev_rpc.o 00:03:39.292 CC lib/event/app.o 00:03:39.292 CC lib/event/reactor.o 00:03:39.292 CC lib/event/log_rpc.o 00:03:39.292 LIB libspdk_virtio.a 00:03:39.292 SO libspdk_virtio.so.7.0 
00:03:39.292 CC lib/event/app_rpc.o 00:03:39.292 LIB libspdk_nvme.a 00:03:39.292 SYMLINK libspdk_virtio.so 00:03:39.292 CC lib/event/scheduler_static.o 00:03:39.292 CC lib/bdev/bdev_rpc.o 00:03:39.552 CC lib/bdev/bdev_zone.o 00:03:39.552 CC lib/bdev/part.o 00:03:39.552 SO libspdk_nvme.so.15.0 00:03:39.552 CC lib/bdev/scsi_nvme.o 00:03:39.552 LIB libspdk_fsdev.a 00:03:39.552 LIB libspdk_event.a 00:03:39.552 SO libspdk_fsdev.so.2.0 00:03:39.812 SO libspdk_event.so.14.0 00:03:39.812 SYMLINK libspdk_fsdev.so 00:03:39.812 SYMLINK libspdk_event.so 00:03:39.812 SYMLINK libspdk_nvme.so 00:03:40.071 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:40.639 LIB libspdk_fuse_dispatcher.a 00:03:40.639 SO libspdk_fuse_dispatcher.so.1.0 00:03:40.639 SYMLINK libspdk_fuse_dispatcher.so 00:03:41.576 LIB libspdk_blob.a 00:03:41.576 SO libspdk_blob.so.12.0 00:03:41.576 SYMLINK libspdk_blob.so 00:03:41.834 CC lib/lvol/lvol.o 00:03:41.834 CC lib/blobfs/blobfs.o 00:03:41.834 CC lib/blobfs/tree.o 00:03:41.834 LIB libspdk_bdev.a 00:03:41.834 SO libspdk_bdev.so.17.0 00:03:42.093 SYMLINK libspdk_bdev.so 00:03:42.093 CC lib/nvmf/ctrlr_discovery.o 00:03:42.093 CC lib/nvmf/ctrlr.o 00:03:42.093 CC lib/nvmf/subsystem.o 00:03:42.093 CC lib/nvmf/ctrlr_bdev.o 00:03:42.093 CC lib/ftl/ftl_core.o 00:03:42.093 CC lib/nbd/nbd.o 00:03:42.093 CC lib/scsi/dev.o 00:03:42.093 CC lib/ublk/ublk.o 00:03:42.660 CC lib/scsi/lun.o 00:03:42.660 CC lib/ftl/ftl_init.o 00:03:42.660 LIB libspdk_lvol.a 00:03:42.660 CC lib/nbd/nbd_rpc.o 00:03:42.660 CC lib/nvmf/nvmf.o 00:03:42.660 SO libspdk_lvol.so.11.0 00:03:42.660 LIB libspdk_blobfs.a 00:03:42.660 SO libspdk_blobfs.so.11.0 00:03:42.660 SYMLINK libspdk_lvol.so 00:03:42.660 CC lib/nvmf/nvmf_rpc.o 00:03:42.942 CC lib/scsi/port.o 00:03:42.942 SYMLINK libspdk_blobfs.so 00:03:42.942 CC lib/scsi/scsi.o 00:03:42.942 CC lib/ublk/ublk_rpc.o 00:03:42.942 LIB libspdk_nbd.a 00:03:42.942 CC lib/ftl/ftl_layout.o 00:03:42.942 SO libspdk_nbd.so.7.0 00:03:42.942 CC lib/nvmf/transport.o 00:03:42.942 SYMLINK libspdk_nbd.so 00:03:42.942 CC lib/ftl/ftl_debug.o 00:03:42.942 CC lib/scsi/scsi_bdev.o 00:03:42.942 CC lib/ftl/ftl_io.o 00:03:42.942 LIB libspdk_ublk.a 00:03:42.942 SO libspdk_ublk.so.3.0 00:03:43.200 SYMLINK libspdk_ublk.so 00:03:43.200 CC lib/nvmf/tcp.o 00:03:43.200 CC lib/scsi/scsi_pr.o 00:03:43.200 CC lib/scsi/scsi_rpc.o 00:03:43.200 CC lib/ftl/ftl_sb.o 00:03:43.459 CC lib/scsi/task.o 00:03:43.459 CC lib/nvmf/stubs.o 00:03:43.459 CC lib/ftl/ftl_l2p.o 00:03:43.459 CC lib/ftl/ftl_l2p_flat.o 00:03:43.459 CC lib/ftl/ftl_nv_cache.o 00:03:43.459 CC lib/nvmf/mdns_server.o 00:03:43.727 CC lib/ftl/ftl_band.o 00:03:43.727 LIB libspdk_scsi.a 00:03:43.727 CC lib/ftl/ftl_band_ops.o 00:03:43.727 SO libspdk_scsi.so.9.0 00:03:43.727 CC lib/ftl/ftl_writer.o 00:03:43.727 CC lib/nvmf/rdma.o 00:03:43.727 SYMLINK libspdk_scsi.so 00:03:43.727 CC lib/nvmf/auth.o 00:03:43.986 CC lib/ftl/ftl_rq.o 00:03:43.986 CC lib/ftl/ftl_reloc.o 00:03:43.986 CC lib/ftl/ftl_l2p_cache.o 00:03:43.986 CC lib/ftl/ftl_p2l.o 00:03:43.986 CC lib/iscsi/conn.o 00:03:44.245 CC lib/ftl/ftl_p2l_log.o 00:03:44.245 CC lib/vhost/vhost.o 00:03:44.245 CC lib/vhost/vhost_rpc.o 00:03:44.503 CC lib/ftl/mngt/ftl_mngt.o 00:03:44.503 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:44.503 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:44.504 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:44.504 CC lib/iscsi/init_grp.o 00:03:44.762 CC lib/iscsi/iscsi.o 00:03:44.762 CC lib/iscsi/param.o 00:03:44.762 CC lib/iscsi/portal_grp.o 00:03:44.762 CC lib/iscsi/tgt_node.o 00:03:44.762 CC 
lib/iscsi/iscsi_subsystem.o 00:03:44.762 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:44.762 CC lib/iscsi/iscsi_rpc.o 00:03:45.020 CC lib/vhost/vhost_scsi.o 00:03:45.020 CC lib/vhost/vhost_blk.o 00:03:45.020 CC lib/iscsi/task.o 00:03:45.020 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:45.020 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:45.278 CC lib/vhost/rte_vhost_user.o 00:03:45.279 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:45.279 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:45.279 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:45.279 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:45.279 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:45.279 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:45.537 CC lib/ftl/utils/ftl_conf.o 00:03:45.537 CC lib/ftl/utils/ftl_md.o 00:03:45.537 CC lib/ftl/utils/ftl_mempool.o 00:03:45.537 CC lib/ftl/utils/ftl_bitmap.o 00:03:45.795 CC lib/ftl/utils/ftl_property.o 00:03:45.795 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:45.795 LIB libspdk_nvmf.a 00:03:45.795 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:45.795 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:46.054 SO libspdk_nvmf.so.20.0 00:03:46.054 LIB libspdk_iscsi.a 00:03:46.054 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:46.054 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:46.054 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:46.054 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:46.054 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:46.054 SO libspdk_iscsi.so.8.0 00:03:46.054 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:46.054 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:46.054 SYMLINK libspdk_nvmf.so 00:03:46.054 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:46.313 SYMLINK libspdk_iscsi.so 00:03:46.313 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:46.313 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:46.313 CC lib/ftl/base/ftl_base_dev.o 00:03:46.313 CC lib/ftl/base/ftl_base_bdev.o 00:03:46.313 LIB libspdk_vhost.a 00:03:46.313 CC lib/ftl/ftl_trace.o 00:03:46.313 SO libspdk_vhost.so.8.0 00:03:46.313 SYMLINK libspdk_vhost.so 00:03:46.574 LIB libspdk_ftl.a 00:03:46.833 SO libspdk_ftl.so.9.0 00:03:47.091 SYMLINK libspdk_ftl.so 00:03:47.350 CC module/env_dpdk/env_dpdk_rpc.o 00:03:47.350 CC module/accel/iaa/accel_iaa.o 00:03:47.350 CC module/accel/ioat/accel_ioat.o 00:03:47.350 CC module/sock/posix/posix.o 00:03:47.350 CC module/accel/error/accel_error.o 00:03:47.350 CC module/accel/dsa/accel_dsa.o 00:03:47.350 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:47.350 CC module/fsdev/aio/fsdev_aio.o 00:03:47.350 CC module/keyring/file/keyring.o 00:03:47.350 CC module/blob/bdev/blob_bdev.o 00:03:47.350 LIB libspdk_env_dpdk_rpc.a 00:03:47.350 SO libspdk_env_dpdk_rpc.so.6.0 00:03:47.608 SYMLINK libspdk_env_dpdk_rpc.so 00:03:47.608 CC module/accel/ioat/accel_ioat_rpc.o 00:03:47.608 CC module/keyring/file/keyring_rpc.o 00:03:47.608 CC module/accel/iaa/accel_iaa_rpc.o 00:03:47.608 CC module/accel/dsa/accel_dsa_rpc.o 00:03:47.608 CC module/accel/error/accel_error_rpc.o 00:03:47.608 LIB libspdk_scheduler_dynamic.a 00:03:47.608 SO libspdk_scheduler_dynamic.so.4.0 00:03:47.608 LIB libspdk_blob_bdev.a 00:03:47.608 LIB libspdk_accel_ioat.a 00:03:47.608 SYMLINK libspdk_scheduler_dynamic.so 00:03:47.608 SO libspdk_blob_bdev.so.12.0 00:03:47.608 LIB libspdk_keyring_file.a 00:03:47.608 SO libspdk_accel_ioat.so.6.0 00:03:47.608 LIB libspdk_accel_iaa.a 00:03:47.608 LIB libspdk_accel_dsa.a 00:03:47.608 SO libspdk_keyring_file.so.2.0 00:03:47.608 LIB libspdk_accel_error.a 00:03:47.867 SO libspdk_accel_iaa.so.3.0 00:03:47.867 SYMLINK libspdk_blob_bdev.so 00:03:47.867 SO libspdk_accel_dsa.so.5.0 00:03:47.867 SYMLINK libspdk_accel_ioat.so 
00:03:47.867 SO libspdk_accel_error.so.2.0 00:03:47.867 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:47.867 SYMLINK libspdk_keyring_file.so 00:03:47.867 CC module/fsdev/aio/linux_aio_mgr.o 00:03:47.867 SYMLINK libspdk_accel_iaa.so 00:03:47.867 SYMLINK libspdk_accel_dsa.so 00:03:47.867 SYMLINK libspdk_accel_error.so 00:03:47.867 CC module/sock/uring/uring.o 00:03:47.867 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:47.867 CC module/keyring/linux/keyring.o 00:03:47.867 CC module/scheduler/gscheduler/gscheduler.o 00:03:47.867 LIB libspdk_scheduler_dpdk_governor.a 00:03:48.126 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:48.126 LIB libspdk_fsdev_aio.a 00:03:48.126 SO libspdk_fsdev_aio.so.1.0 00:03:48.126 CC module/bdev/delay/vbdev_delay.o 00:03:48.126 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:48.126 CC module/keyring/linux/keyring_rpc.o 00:03:48.126 LIB libspdk_sock_posix.a 00:03:48.126 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:48.126 CC module/blobfs/bdev/blobfs_bdev.o 00:03:48.126 CC module/bdev/error/vbdev_error.o 00:03:48.126 SO libspdk_sock_posix.so.6.0 00:03:48.126 SYMLINK libspdk_fsdev_aio.so 00:03:48.126 LIB libspdk_scheduler_gscheduler.a 00:03:48.126 CC module/bdev/error/vbdev_error_rpc.o 00:03:48.126 SO libspdk_scheduler_gscheduler.so.4.0 00:03:48.126 CC module/bdev/gpt/gpt.o 00:03:48.126 SYMLINK libspdk_sock_posix.so 00:03:48.126 CC module/bdev/gpt/vbdev_gpt.o 00:03:48.385 LIB libspdk_keyring_linux.a 00:03:48.385 SYMLINK libspdk_scheduler_gscheduler.so 00:03:48.385 SO libspdk_keyring_linux.so.1.0 00:03:48.385 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:48.385 SYMLINK libspdk_keyring_linux.so 00:03:48.385 LIB libspdk_bdev_error.a 00:03:48.385 SO libspdk_bdev_error.so.6.0 00:03:48.385 CC module/bdev/lvol/vbdev_lvol.o 00:03:48.385 LIB libspdk_bdev_delay.a 00:03:48.385 SYMLINK libspdk_bdev_error.so 00:03:48.385 SO libspdk_bdev_delay.so.6.0 00:03:48.385 LIB libspdk_blobfs_bdev.a 00:03:48.651 CC module/bdev/null/bdev_null.o 00:03:48.651 LIB libspdk_bdev_gpt.a 00:03:48.651 CC module/bdev/malloc/bdev_malloc.o 00:03:48.651 LIB libspdk_sock_uring.a 00:03:48.651 CC module/bdev/nvme/bdev_nvme.o 00:03:48.651 SO libspdk_blobfs_bdev.so.6.0 00:03:48.651 SO libspdk_bdev_gpt.so.6.0 00:03:48.651 SO libspdk_sock_uring.so.5.0 00:03:48.651 SYMLINK libspdk_bdev_delay.so 00:03:48.651 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:48.651 SYMLINK libspdk_blobfs_bdev.so 00:03:48.651 CC module/bdev/passthru/vbdev_passthru.o 00:03:48.651 SYMLINK libspdk_bdev_gpt.so 00:03:48.651 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:48.651 SYMLINK libspdk_sock_uring.so 00:03:48.651 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:48.651 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:48.651 CC module/bdev/raid/bdev_raid.o 00:03:48.910 CC module/bdev/null/bdev_null_rpc.o 00:03:48.910 CC module/bdev/raid/bdev_raid_rpc.o 00:03:48.910 CC module/bdev/raid/bdev_raid_sb.o 00:03:48.910 LIB libspdk_bdev_malloc.a 00:03:48.910 LIB libspdk_bdev_passthru.a 00:03:48.910 SO libspdk_bdev_malloc.so.6.0 00:03:48.910 SO libspdk_bdev_passthru.so.6.0 00:03:48.910 CC module/bdev/raid/raid0.o 00:03:48.910 LIB libspdk_bdev_null.a 00:03:48.910 SYMLINK libspdk_bdev_malloc.so 00:03:48.910 LIB libspdk_bdev_lvol.a 00:03:48.910 SYMLINK libspdk_bdev_passthru.so 00:03:48.910 SO libspdk_bdev_null.so.6.0 00:03:49.170 SO libspdk_bdev_lvol.so.6.0 00:03:49.170 CC module/bdev/nvme/nvme_rpc.o 00:03:49.170 SYMLINK libspdk_bdev_null.so 00:03:49.170 SYMLINK libspdk_bdev_lvol.so 00:03:49.170 CC module/bdev/split/vbdev_split.o 00:03:49.170 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:03:49.170 CC module/bdev/nvme/bdev_mdns_client.o 00:03:49.170 CC module/bdev/nvme/vbdev_opal.o 00:03:49.170 CC module/bdev/uring/bdev_uring.o 00:03:49.170 CC module/bdev/aio/bdev_aio.o 00:03:49.170 CC module/bdev/ftl/bdev_ftl.o 00:03:49.430 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:49.430 CC module/bdev/split/vbdev_split_rpc.o 00:03:49.430 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:49.430 CC module/bdev/uring/bdev_uring_rpc.o 00:03:49.430 CC module/bdev/aio/bdev_aio_rpc.o 00:03:49.689 CC module/bdev/iscsi/bdev_iscsi.o 00:03:49.689 LIB libspdk_bdev_split.a 00:03:49.689 LIB libspdk_bdev_ftl.a 00:03:49.689 SO libspdk_bdev_split.so.6.0 00:03:49.689 SO libspdk_bdev_ftl.so.6.0 00:03:49.689 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:49.689 CC module/bdev/raid/raid1.o 00:03:49.689 SYMLINK libspdk_bdev_split.so 00:03:49.689 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:49.689 LIB libspdk_bdev_zone_block.a 00:03:49.689 LIB libspdk_bdev_uring.a 00:03:49.689 SYMLINK libspdk_bdev_ftl.so 00:03:49.689 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:49.689 LIB libspdk_bdev_aio.a 00:03:49.689 SO libspdk_bdev_uring.so.6.0 00:03:49.689 SO libspdk_bdev_zone_block.so.6.0 00:03:49.689 SO libspdk_bdev_aio.so.6.0 00:03:49.689 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:49.689 SYMLINK libspdk_bdev_uring.so 00:03:49.689 SYMLINK libspdk_bdev_zone_block.so 00:03:49.689 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:49.689 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:49.689 CC module/bdev/raid/concat.o 00:03:49.948 SYMLINK libspdk_bdev_aio.so 00:03:49.948 LIB libspdk_bdev_iscsi.a 00:03:49.948 SO libspdk_bdev_iscsi.so.6.0 00:03:49.948 LIB libspdk_bdev_raid.a 00:03:49.948 SYMLINK libspdk_bdev_iscsi.so 00:03:50.207 SO libspdk_bdev_raid.so.6.0 00:03:50.207 SYMLINK libspdk_bdev_raid.so 00:03:50.207 LIB libspdk_bdev_virtio.a 00:03:50.467 SO libspdk_bdev_virtio.so.6.0 00:03:50.467 SYMLINK libspdk_bdev_virtio.so 00:03:51.034 LIB libspdk_bdev_nvme.a 00:03:51.294 SO libspdk_bdev_nvme.so.7.1 00:03:51.294 SYMLINK libspdk_bdev_nvme.so 00:03:51.862 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:51.862 CC module/event/subsystems/sock/sock.o 00:03:51.862 CC module/event/subsystems/keyring/keyring.o 00:03:51.862 CC module/event/subsystems/vmd/vmd.o 00:03:51.862 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:51.862 CC module/event/subsystems/iobuf/iobuf.o 00:03:51.862 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:51.862 CC module/event/subsystems/fsdev/fsdev.o 00:03:51.862 CC module/event/subsystems/scheduler/scheduler.o 00:03:51.862 LIB libspdk_event_keyring.a 00:03:51.862 SO libspdk_event_keyring.so.1.0 00:03:51.862 LIB libspdk_event_fsdev.a 00:03:51.862 LIB libspdk_event_sock.a 00:03:51.862 LIB libspdk_event_vmd.a 00:03:51.862 LIB libspdk_event_vhost_blk.a 00:03:51.862 LIB libspdk_event_scheduler.a 00:03:51.862 LIB libspdk_event_iobuf.a 00:03:51.862 SO libspdk_event_sock.so.5.0 00:03:51.862 SO libspdk_event_fsdev.so.1.0 00:03:52.121 SO libspdk_event_vmd.so.6.0 00:03:52.121 SO libspdk_event_vhost_blk.so.3.0 00:03:52.121 SO libspdk_event_scheduler.so.4.0 00:03:52.121 SYMLINK libspdk_event_keyring.so 00:03:52.121 SO libspdk_event_iobuf.so.3.0 00:03:52.121 SYMLINK libspdk_event_sock.so 00:03:52.121 SYMLINK libspdk_event_fsdev.so 00:03:52.121 SYMLINK libspdk_event_vhost_blk.so 00:03:52.121 SYMLINK libspdk_event_vmd.so 00:03:52.121 SYMLINK libspdk_event_scheduler.so 00:03:52.121 SYMLINK libspdk_event_iobuf.so 00:03:52.380 CC module/event/subsystems/accel/accel.o 
00:03:52.639 LIB libspdk_event_accel.a 00:03:52.639 SO libspdk_event_accel.so.6.0 00:03:52.639 SYMLINK libspdk_event_accel.so 00:03:52.899 CC module/event/subsystems/bdev/bdev.o 00:03:53.158 LIB libspdk_event_bdev.a 00:03:53.158 SO libspdk_event_bdev.so.6.0 00:03:53.158 SYMLINK libspdk_event_bdev.so 00:03:53.417 CC module/event/subsystems/scsi/scsi.o 00:03:53.417 CC module/event/subsystems/ublk/ublk.o 00:03:53.417 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:53.417 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:53.417 CC module/event/subsystems/nbd/nbd.o 00:03:53.677 LIB libspdk_event_nbd.a 00:03:53.677 LIB libspdk_event_ublk.a 00:03:53.677 LIB libspdk_event_scsi.a 00:03:53.677 SO libspdk_event_ublk.so.3.0 00:03:53.677 SO libspdk_event_scsi.so.6.0 00:03:53.677 SO libspdk_event_nbd.so.6.0 00:03:53.677 SYMLINK libspdk_event_ublk.so 00:03:53.677 SYMLINK libspdk_event_scsi.so 00:03:53.677 SYMLINK libspdk_event_nbd.so 00:03:53.677 LIB libspdk_event_nvmf.a 00:03:53.936 SO libspdk_event_nvmf.so.6.0 00:03:53.936 SYMLINK libspdk_event_nvmf.so 00:03:53.936 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:53.936 CC module/event/subsystems/iscsi/iscsi.o 00:03:54.195 LIB libspdk_event_vhost_scsi.a 00:03:54.195 LIB libspdk_event_iscsi.a 00:03:54.195 SO libspdk_event_vhost_scsi.so.3.0 00:03:54.195 SO libspdk_event_iscsi.so.6.0 00:03:54.454 SYMLINK libspdk_event_vhost_scsi.so 00:03:54.454 SYMLINK libspdk_event_iscsi.so 00:03:54.454 SO libspdk.so.6.0 00:03:54.454 SYMLINK libspdk.so 00:03:54.713 CC app/trace_record/trace_record.o 00:03:54.713 CC app/spdk_nvme_perf/perf.o 00:03:54.713 CC app/spdk_nvme_identify/identify.o 00:03:54.713 CXX app/trace/trace.o 00:03:54.713 CC app/spdk_lspci/spdk_lspci.o 00:03:54.713 CC app/nvmf_tgt/nvmf_main.o 00:03:54.972 CC app/iscsi_tgt/iscsi_tgt.o 00:03:54.972 CC app/spdk_tgt/spdk_tgt.o 00:03:54.972 CC examples/util/zipf/zipf.o 00:03:54.972 CC test/thread/poller_perf/poller_perf.o 00:03:54.972 LINK spdk_lspci 00:03:54.972 LINK spdk_trace_record 00:03:54.972 LINK nvmf_tgt 00:03:54.972 LINK poller_perf 00:03:55.231 LINK zipf 00:03:55.231 LINK iscsi_tgt 00:03:55.231 LINK spdk_tgt 00:03:55.231 CC app/spdk_nvme_discover/discovery_aer.o 00:03:55.231 LINK spdk_trace 00:03:55.491 LINK spdk_nvme_discover 00:03:55.491 CC app/spdk_top/spdk_top.o 00:03:55.491 CC test/dma/test_dma/test_dma.o 00:03:55.491 TEST_HEADER include/spdk/accel.h 00:03:55.491 CC examples/ioat/perf/perf.o 00:03:55.491 TEST_HEADER include/spdk/accel_module.h 00:03:55.491 TEST_HEADER include/spdk/assert.h 00:03:55.491 TEST_HEADER include/spdk/barrier.h 00:03:55.491 TEST_HEADER include/spdk/base64.h 00:03:55.491 TEST_HEADER include/spdk/bdev.h 00:03:55.491 TEST_HEADER include/spdk/bdev_module.h 00:03:55.491 TEST_HEADER include/spdk/bdev_zone.h 00:03:55.491 TEST_HEADER include/spdk/bit_array.h 00:03:55.491 TEST_HEADER include/spdk/bit_pool.h 00:03:55.491 TEST_HEADER include/spdk/blob_bdev.h 00:03:55.491 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:55.491 TEST_HEADER include/spdk/blobfs.h 00:03:55.491 TEST_HEADER include/spdk/blob.h 00:03:55.491 CC examples/vmd/lsvmd/lsvmd.o 00:03:55.491 TEST_HEADER include/spdk/conf.h 00:03:55.491 TEST_HEADER include/spdk/config.h 00:03:55.491 TEST_HEADER include/spdk/cpuset.h 00:03:55.491 CC test/app/bdev_svc/bdev_svc.o 00:03:55.491 TEST_HEADER include/spdk/crc16.h 00:03:55.491 TEST_HEADER include/spdk/crc32.h 00:03:55.491 TEST_HEADER include/spdk/crc64.h 00:03:55.491 TEST_HEADER include/spdk/dif.h 00:03:55.491 TEST_HEADER include/spdk/dma.h 00:03:55.491 TEST_HEADER 
include/spdk/endian.h 00:03:55.491 TEST_HEADER include/spdk/env_dpdk.h 00:03:55.491 TEST_HEADER include/spdk/env.h 00:03:55.491 TEST_HEADER include/spdk/event.h 00:03:55.491 TEST_HEADER include/spdk/fd_group.h 00:03:55.491 TEST_HEADER include/spdk/fd.h 00:03:55.491 TEST_HEADER include/spdk/file.h 00:03:55.491 TEST_HEADER include/spdk/fsdev.h 00:03:55.491 TEST_HEADER include/spdk/fsdev_module.h 00:03:55.491 TEST_HEADER include/spdk/ftl.h 00:03:55.491 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:55.491 TEST_HEADER include/spdk/gpt_spec.h 00:03:55.491 TEST_HEADER include/spdk/hexlify.h 00:03:55.491 TEST_HEADER include/spdk/histogram_data.h 00:03:55.491 TEST_HEADER include/spdk/idxd.h 00:03:55.491 TEST_HEADER include/spdk/idxd_spec.h 00:03:55.491 TEST_HEADER include/spdk/init.h 00:03:55.491 TEST_HEADER include/spdk/ioat.h 00:03:55.491 TEST_HEADER include/spdk/ioat_spec.h 00:03:55.491 TEST_HEADER include/spdk/iscsi_spec.h 00:03:55.491 TEST_HEADER include/spdk/json.h 00:03:55.491 TEST_HEADER include/spdk/jsonrpc.h 00:03:55.491 TEST_HEADER include/spdk/keyring.h 00:03:55.491 TEST_HEADER include/spdk/keyring_module.h 00:03:55.491 TEST_HEADER include/spdk/likely.h 00:03:55.491 TEST_HEADER include/spdk/log.h 00:03:55.491 TEST_HEADER include/spdk/lvol.h 00:03:55.491 TEST_HEADER include/spdk/md5.h 00:03:55.491 TEST_HEADER include/spdk/memory.h 00:03:55.491 TEST_HEADER include/spdk/mmio.h 00:03:55.491 TEST_HEADER include/spdk/nbd.h 00:03:55.491 TEST_HEADER include/spdk/net.h 00:03:55.491 TEST_HEADER include/spdk/notify.h 00:03:55.491 TEST_HEADER include/spdk/nvme.h 00:03:55.491 TEST_HEADER include/spdk/nvme_intel.h 00:03:55.491 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:55.491 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:55.491 TEST_HEADER include/spdk/nvme_spec.h 00:03:55.491 TEST_HEADER include/spdk/nvme_zns.h 00:03:55.491 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:55.491 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:55.491 TEST_HEADER include/spdk/nvmf.h 00:03:55.491 TEST_HEADER include/spdk/nvmf_spec.h 00:03:55.491 TEST_HEADER include/spdk/nvmf_transport.h 00:03:55.491 TEST_HEADER include/spdk/opal.h 00:03:55.491 TEST_HEADER include/spdk/opal_spec.h 00:03:55.491 TEST_HEADER include/spdk/pci_ids.h 00:03:55.491 TEST_HEADER include/spdk/pipe.h 00:03:55.491 TEST_HEADER include/spdk/queue.h 00:03:55.491 TEST_HEADER include/spdk/reduce.h 00:03:55.491 TEST_HEADER include/spdk/rpc.h 00:03:55.491 TEST_HEADER include/spdk/scheduler.h 00:03:55.491 TEST_HEADER include/spdk/scsi.h 00:03:55.491 TEST_HEADER include/spdk/scsi_spec.h 00:03:55.491 TEST_HEADER include/spdk/sock.h 00:03:55.491 TEST_HEADER include/spdk/stdinc.h 00:03:55.491 TEST_HEADER include/spdk/string.h 00:03:55.491 TEST_HEADER include/spdk/thread.h 00:03:55.491 TEST_HEADER include/spdk/trace.h 00:03:55.491 TEST_HEADER include/spdk/trace_parser.h 00:03:55.751 TEST_HEADER include/spdk/tree.h 00:03:55.751 TEST_HEADER include/spdk/ublk.h 00:03:55.751 TEST_HEADER include/spdk/util.h 00:03:55.751 TEST_HEADER include/spdk/uuid.h 00:03:55.751 TEST_HEADER include/spdk/version.h 00:03:55.751 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:55.751 LINK lsvmd 00:03:55.751 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:55.751 TEST_HEADER include/spdk/vhost.h 00:03:55.751 TEST_HEADER include/spdk/vmd.h 00:03:55.751 TEST_HEADER include/spdk/xor.h 00:03:55.751 TEST_HEADER include/spdk/zipf.h 00:03:55.751 CXX test/cpp_headers/accel.o 00:03:55.751 LINK spdk_nvme_identify 00:03:55.751 LINK bdev_svc 00:03:55.751 LINK ioat_perf 00:03:55.751 CC 
test/env/mem_callbacks/mem_callbacks.o 00:03:55.751 LINK spdk_nvme_perf 00:03:55.751 CXX test/cpp_headers/accel_module.o 00:03:55.751 CC test/event/event_perf/event_perf.o 00:03:56.010 CXX test/cpp_headers/assert.o 00:03:56.010 CC test/event/reactor/reactor.o 00:03:56.010 CC examples/vmd/led/led.o 00:03:56.010 CC examples/ioat/verify/verify.o 00:03:56.010 LINK test_dma 00:03:56.010 LINK event_perf 00:03:56.010 LINK reactor 00:03:56.010 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:56.010 CXX test/cpp_headers/barrier.o 00:03:56.010 LINK led 00:03:56.269 CC test/rpc_client/rpc_client_test.o 00:03:56.269 LINK verify 00:03:56.269 CXX test/cpp_headers/base64.o 00:03:56.269 CXX test/cpp_headers/bdev.o 00:03:56.269 LINK spdk_top 00:03:56.269 CC test/event/reactor_perf/reactor_perf.o 00:03:56.269 LINK mem_callbacks 00:03:56.269 LINK rpc_client_test 00:03:56.269 CC test/event/app_repeat/app_repeat.o 00:03:56.269 CC test/event/scheduler/scheduler.o 00:03:56.269 CXX test/cpp_headers/bdev_module.o 00:03:56.528 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:56.528 LINK reactor_perf 00:03:56.528 CC examples/idxd/perf/perf.o 00:03:56.528 LINK nvme_fuzz 00:03:56.528 LINK app_repeat 00:03:56.528 CC test/env/vtophys/vtophys.o 00:03:56.528 CC app/spdk_dd/spdk_dd.o 00:03:56.528 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:56.528 CXX test/cpp_headers/bdev_zone.o 00:03:56.528 CXX test/cpp_headers/bit_array.o 00:03:56.528 LINK scheduler 00:03:56.787 LINK interrupt_tgt 00:03:56.787 LINK vtophys 00:03:56.787 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:56.787 LINK env_dpdk_post_init 00:03:56.787 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:56.787 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:56.787 CXX test/cpp_headers/bit_pool.o 00:03:56.787 CXX test/cpp_headers/blob_bdev.o 00:03:56.787 LINK idxd_perf 00:03:56.787 CXX test/cpp_headers/blobfs_bdev.o 00:03:57.044 CXX test/cpp_headers/blobfs.o 00:03:57.044 CC test/env/memory/memory_ut.o 00:03:57.044 LINK spdk_dd 00:03:57.044 CC test/env/pci/pci_ut.o 00:03:57.044 CXX test/cpp_headers/blob.o 00:03:57.044 CC app/fio/nvme/fio_plugin.o 00:03:57.044 CC app/fio/bdev/fio_plugin.o 00:03:57.302 CC examples/thread/thread/thread_ex.o 00:03:57.302 LINK vhost_fuzz 00:03:57.302 CXX test/cpp_headers/conf.o 00:03:57.302 CXX test/cpp_headers/config.o 00:03:57.302 CC app/vhost/vhost.o 00:03:57.302 CXX test/cpp_headers/cpuset.o 00:03:57.560 LINK pci_ut 00:03:57.560 LINK thread 00:03:57.560 CC test/accel/dif/dif.o 00:03:57.560 CXX test/cpp_headers/crc16.o 00:03:57.560 LINK vhost 00:03:57.560 CC examples/sock/hello_world/hello_sock.o 00:03:57.560 CXX test/cpp_headers/crc32.o 00:03:57.560 LINK spdk_bdev 00:03:57.560 LINK spdk_nvme 00:03:57.819 CXX test/cpp_headers/crc64.o 00:03:57.819 LINK hello_sock 00:03:57.819 CC test/app/histogram_perf/histogram_perf.o 00:03:57.819 CXX test/cpp_headers/dif.o 00:03:57.819 CC test/nvme/aer/aer.o 00:03:58.078 CC test/blobfs/mkfs/mkfs.o 00:03:58.078 CXX test/cpp_headers/dma.o 00:03:58.078 CC examples/accel/perf/accel_perf.o 00:03:58.078 CC test/lvol/esnap/esnap.o 00:03:58.078 LINK histogram_perf 00:03:58.078 LINK dif 00:03:58.078 CC test/app/jsoncat/jsoncat.o 00:03:58.078 CXX test/cpp_headers/endian.o 00:03:58.078 LINK mkfs 00:03:58.337 LINK aer 00:03:58.337 LINK memory_ut 00:03:58.337 CC test/app/stub/stub.o 00:03:58.337 LINK jsoncat 00:03:58.337 CXX test/cpp_headers/env_dpdk.o 00:03:58.337 LINK iscsi_fuzz 00:03:58.595 CC test/nvme/reset/reset.o 00:03:58.595 LINK stub 00:03:58.595 LINK accel_perf 00:03:58.595 CXX 
test/cpp_headers/env.o 00:03:58.595 CC test/nvme/sgl/sgl.o 00:03:58.595 CC examples/blob/cli/blobcli.o 00:03:58.595 CC examples/blob/hello_world/hello_blob.o 00:03:58.595 CC test/bdev/bdevio/bdevio.o 00:03:58.595 CXX test/cpp_headers/event.o 00:03:58.863 CC test/nvme/overhead/overhead.o 00:03:58.863 CC test/nvme/e2edp/nvme_dp.o 00:03:58.863 LINK reset 00:03:58.863 CC test/nvme/err_injection/err_injection.o 00:03:58.863 LINK sgl 00:03:58.863 LINK hello_blob 00:03:58.863 CXX test/cpp_headers/fd_group.o 00:03:59.134 CC test/nvme/startup/startup.o 00:03:59.134 LINK err_injection 00:03:59.134 LINK nvme_dp 00:03:59.134 LINK bdevio 00:03:59.134 LINK overhead 00:03:59.134 CC test/nvme/reserve/reserve.o 00:03:59.134 LINK blobcli 00:03:59.134 CXX test/cpp_headers/fd.o 00:03:59.134 CC test/nvme/simple_copy/simple_copy.o 00:03:59.134 LINK startup 00:03:59.393 CXX test/cpp_headers/file.o 00:03:59.393 CC test/nvme/connect_stress/connect_stress.o 00:03:59.393 CC test/nvme/boot_partition/boot_partition.o 00:03:59.393 CC test/nvme/compliance/nvme_compliance.o 00:03:59.393 LINK reserve 00:03:59.393 CC test/nvme/fused_ordering/fused_ordering.o 00:03:59.393 LINK simple_copy 00:03:59.393 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:59.393 CXX test/cpp_headers/fsdev.o 00:03:59.393 CC examples/nvme/hello_world/hello_world.o 00:03:59.393 LINK connect_stress 00:03:59.393 LINK boot_partition 00:03:59.652 LINK fused_ordering 00:03:59.652 CC examples/nvme/reconnect/reconnect.o 00:03:59.652 LINK nvme_compliance 00:03:59.652 CXX test/cpp_headers/fsdev_module.o 00:03:59.652 CC test/nvme/fdp/fdp.o 00:03:59.652 CXX test/cpp_headers/ftl.o 00:03:59.652 LINK doorbell_aers 00:03:59.652 LINK hello_world 00:03:59.652 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.652 CXX test/cpp_headers/fuse_dispatcher.o 00:03:59.911 CXX test/cpp_headers/gpt_spec.o 00:03:59.911 CXX test/cpp_headers/hexlify.o 00:03:59.911 LINK reconnect 00:03:59.911 CC examples/nvme/hotplug/hotplug.o 00:03:59.911 CC examples/nvme/arbitration/arbitration.o 00:03:59.911 CXX test/cpp_headers/histogram_data.o 00:03:59.911 LINK fdp 00:04:00.170 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:00.170 CXX test/cpp_headers/idxd.o 00:04:00.170 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.170 LINK hotplug 00:04:00.170 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:00.170 CC test/nvme/cuse/cuse.o 00:04:00.170 CC examples/bdev/bdevperf/bdevperf.o 00:04:00.170 LINK nvme_manage 00:04:00.428 LINK arbitration 00:04:00.428 CXX test/cpp_headers/idxd_spec.o 00:04:00.428 LINK hello_fsdev 00:04:00.428 LINK hello_bdev 00:04:00.428 LINK cmb_copy 00:04:00.428 CC examples/nvme/abort/abort.o 00:04:00.428 CXX test/cpp_headers/init.o 00:04:00.428 CXX test/cpp_headers/ioat.o 00:04:00.687 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:00.687 CXX test/cpp_headers/ioat_spec.o 00:04:00.687 CXX test/cpp_headers/iscsi_spec.o 00:04:00.687 CXX test/cpp_headers/json.o 00:04:00.687 CXX test/cpp_headers/jsonrpc.o 00:04:00.687 CXX test/cpp_headers/keyring.o 00:04:00.687 CXX test/cpp_headers/keyring_module.o 00:04:00.687 LINK pmr_persistence 00:04:00.687 CXX test/cpp_headers/likely.o 00:04:00.687 CXX test/cpp_headers/log.o 00:04:00.948 LINK abort 00:04:00.948 CXX test/cpp_headers/lvol.o 00:04:00.948 CXX test/cpp_headers/md5.o 00:04:00.948 CXX test/cpp_headers/memory.o 00:04:00.948 CXX test/cpp_headers/mmio.o 00:04:00.948 CXX test/cpp_headers/nbd.o 00:04:00.948 CXX test/cpp_headers/net.o 00:04:00.948 CXX test/cpp_headers/notify.o 00:04:00.948 CXX test/cpp_headers/nvme.o 
00:04:00.948 CXX test/cpp_headers/nvme_intel.o 00:04:01.208 CXX test/cpp_headers/nvme_ocssd.o 00:04:01.208 LINK bdevperf 00:04:01.208 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:01.208 CXX test/cpp_headers/nvme_spec.o 00:04:01.208 CXX test/cpp_headers/nvme_zns.o 00:04:01.208 CXX test/cpp_headers/nvmf_cmd.o 00:04:01.208 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:01.208 CXX test/cpp_headers/nvmf.o 00:04:01.208 CXX test/cpp_headers/nvmf_spec.o 00:04:01.208 CXX test/cpp_headers/nvmf_transport.o 00:04:01.208 CXX test/cpp_headers/opal.o 00:04:01.467 CXX test/cpp_headers/opal_spec.o 00:04:01.467 CXX test/cpp_headers/pci_ids.o 00:04:01.467 CXX test/cpp_headers/pipe.o 00:04:01.467 CXX test/cpp_headers/queue.o 00:04:01.467 CXX test/cpp_headers/reduce.o 00:04:01.467 CXX test/cpp_headers/rpc.o 00:04:01.467 CC examples/nvmf/nvmf/nvmf.o 00:04:01.467 CXX test/cpp_headers/scheduler.o 00:04:01.467 CXX test/cpp_headers/scsi.o 00:04:01.467 CXX test/cpp_headers/scsi_spec.o 00:04:01.467 CXX test/cpp_headers/sock.o 00:04:01.467 CXX test/cpp_headers/stdinc.o 00:04:01.467 LINK cuse 00:04:01.725 CXX test/cpp_headers/string.o 00:04:01.725 CXX test/cpp_headers/thread.o 00:04:01.725 CXX test/cpp_headers/trace.o 00:04:01.725 CXX test/cpp_headers/trace_parser.o 00:04:01.725 CXX test/cpp_headers/tree.o 00:04:01.725 CXX test/cpp_headers/ublk.o 00:04:01.725 CXX test/cpp_headers/util.o 00:04:01.725 CXX test/cpp_headers/uuid.o 00:04:01.725 CXX test/cpp_headers/version.o 00:04:01.725 CXX test/cpp_headers/vfio_user_pci.o 00:04:01.725 CXX test/cpp_headers/vfio_user_spec.o 00:04:01.725 LINK nvmf 00:04:01.725 CXX test/cpp_headers/vhost.o 00:04:01.983 CXX test/cpp_headers/vmd.o 00:04:01.983 CXX test/cpp_headers/xor.o 00:04:01.983 CXX test/cpp_headers/zipf.o 00:04:02.919 LINK esnap 00:04:03.487 00:04:03.487 real 1m26.094s 00:04:03.487 user 8m2.282s 00:04:03.487 sys 1m35.809s 00:04:03.487 19:12:01 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:03.487 19:12:01 make -- common/autotest_common.sh@10 -- $ set +x 00:04:03.487 ************************************ 00:04:03.487 END TEST make 00:04:03.487 ************************************ 00:04:03.487 19:12:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:03.487 19:12:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:03.487 19:12:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:03.487 19:12:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.487 19:12:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:03.487 19:12:01 -- pm/common@44 -- $ pid=5237 00:04:03.487 19:12:01 -- pm/common@50 -- $ kill -TERM 5237 00:04:03.487 19:12:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.487 19:12:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:03.487 19:12:01 -- pm/common@44 -- $ pid=5239 00:04:03.487 19:12:01 -- pm/common@50 -- $ kill -TERM 5239 00:04:03.487 19:12:01 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:03.487 19:12:01 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:03.487 19:12:01 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.487 19:12:01 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.487 19:12:01 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.487 19:12:01 -- common/autotest_common.sh@1693 -- # lt 1.15 2 
00:04:03.487 19:12:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.487 19:12:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.487 19:12:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.487 19:12:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.488 19:12:01 -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.488 19:12:01 -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.488 19:12:01 -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.488 19:12:01 -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.488 19:12:01 -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.488 19:12:01 -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.488 19:12:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.488 19:12:01 -- scripts/common.sh@344 -- # case "$op" in 00:04:03.488 19:12:01 -- scripts/common.sh@345 -- # : 1 00:04:03.488 19:12:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.488 19:12:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.488 19:12:01 -- scripts/common.sh@365 -- # decimal 1 00:04:03.488 19:12:01 -- scripts/common.sh@353 -- # local d=1 00:04:03.488 19:12:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.488 19:12:01 -- scripts/common.sh@355 -- # echo 1 00:04:03.488 19:12:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.488 19:12:01 -- scripts/common.sh@366 -- # decimal 2 00:04:03.488 19:12:01 -- scripts/common.sh@353 -- # local d=2 00:04:03.488 19:12:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.488 19:12:01 -- scripts/common.sh@355 -- # echo 2 00:04:03.488 19:12:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.488 19:12:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.488 19:12:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.488 19:12:01 -- scripts/common.sh@368 -- # return 0 00:04:03.488 19:12:01 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.488 19:12:01 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.488 --rc genhtml_branch_coverage=1 00:04:03.488 --rc genhtml_function_coverage=1 00:04:03.488 --rc genhtml_legend=1 00:04:03.488 --rc geninfo_all_blocks=1 00:04:03.488 --rc geninfo_unexecuted_blocks=1 00:04:03.488 00:04:03.488 ' 00:04:03.488 19:12:01 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.488 --rc genhtml_branch_coverage=1 00:04:03.488 --rc genhtml_function_coverage=1 00:04:03.488 --rc genhtml_legend=1 00:04:03.488 --rc geninfo_all_blocks=1 00:04:03.488 --rc geninfo_unexecuted_blocks=1 00:04:03.488 00:04:03.488 ' 00:04:03.488 19:12:01 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.488 --rc genhtml_branch_coverage=1 00:04:03.488 --rc genhtml_function_coverage=1 00:04:03.488 --rc genhtml_legend=1 00:04:03.488 --rc geninfo_all_blocks=1 00:04:03.488 --rc geninfo_unexecuted_blocks=1 00:04:03.488 00:04:03.488 ' 00:04:03.488 19:12:01 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.488 --rc genhtml_branch_coverage=1 00:04:03.488 --rc genhtml_function_coverage=1 00:04:03.488 --rc genhtml_legend=1 00:04:03.488 --rc geninfo_all_blocks=1 00:04:03.488 --rc geninfo_unexecuted_blocks=1 00:04:03.488 00:04:03.488 ' 00:04:03.488 
19:12:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:03.488 19:12:01 -- nvmf/common.sh@7 -- # uname -s 00:04:03.488 19:12:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.488 19:12:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.488 19:12:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.488 19:12:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.488 19:12:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.488 19:12:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.488 19:12:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.748 19:12:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.748 19:12:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.748 19:12:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.748 19:12:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:04:03.748 19:12:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:04:03.748 19:12:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.748 19:12:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.748 19:12:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:03.748 19:12:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:03.748 19:12:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:03.748 19:12:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:03.748 19:12:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.748 19:12:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.748 19:12:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.748 19:12:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.748 19:12:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.748 19:12:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.748 19:12:01 -- paths/export.sh@5 -- # export PATH 00:04:03.748 19:12:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.748 19:12:01 -- nvmf/common.sh@51 -- # : 0 00:04:03.748 19:12:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:03.748 19:12:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:03.748 19:12:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:03.748 19:12:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.748 19:12:01 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.748 19:12:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:03.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:03.748 19:12:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:03.748 19:12:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:03.748 19:12:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:03.748 19:12:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:03.748 19:12:01 -- spdk/autotest.sh@32 -- # uname -s 00:04:03.748 19:12:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:03.748 19:12:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:03.748 19:12:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:03.748 19:12:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:03.748 19:12:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:03.748 19:12:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:03.748 19:12:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:03.748 19:12:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:03.748 19:12:02 -- spdk/autotest.sh@48 -- # udevadm_pid=54299 00:04:03.748 19:12:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:03.748 19:12:02 -- pm/common@17 -- # local monitor 00:04:03.748 19:12:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.748 19:12:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:03.748 19:12:02 -- pm/common@21 -- # date +%s 00:04:03.748 19:12:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.748 19:12:02 -- pm/common@25 -- # sleep 1 00:04:03.748 19:12:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732648322 00:04:03.748 19:12:02 -- pm/common@21 -- # date +%s 00:04:03.748 19:12:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732648322 00:04:03.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732648322_collect-vmstat.pm.log 00:04:03.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732648322_collect-cpu-load.pm.log 00:04:04.686 19:12:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:04.686 19:12:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:04.686 19:12:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.686 19:12:03 -- common/autotest_common.sh@10 -- # set +x 00:04:04.686 19:12:03 -- spdk/autotest.sh@59 -- # create_test_list 00:04:04.686 19:12:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:04.686 19:12:03 -- common/autotest_common.sh@10 -- # set +x 00:04:04.686 19:12:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:04.686 19:12:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:04.686 19:12:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:04.686 19:12:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:04.686 19:12:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:04.686 19:12:03 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:04:04.686 19:12:03 -- common/autotest_common.sh@1457 -- # uname 00:04:04.686 19:12:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:04.686 19:12:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:04.686 19:12:03 -- common/autotest_common.sh@1477 -- # uname 00:04:04.686 19:12:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:04.686 19:12:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:04.686 19:12:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:04.945 lcov: LCOV version 1.15 00:04:04.945 19:12:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:19.827 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:19.827 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:37.916 19:12:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:37.916 19:12:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.916 19:12:33 -- common/autotest_common.sh@10 -- # set +x 00:04:37.916 19:12:33 -- spdk/autotest.sh@78 -- # rm -f 00:04:37.916 19:12:33 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.916 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:37.916 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:37.916 19:12:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:37.916 19:12:34 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:37.916 19:12:34 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:37.916 19:12:34 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:37.916 19:12:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:37.916 19:12:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:37.916 19:12:34 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:37.917 19:12:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.917 19:12:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.917 19:12:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:37.917 19:12:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:37.917 19:12:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:37.917 19:12:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:37.917 19:12:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.917 19:12:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:37.917 19:12:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:37.917 19:12:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:37.917 19:12:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:37.917 19:12:34 -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.917 19:12:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:37.917 19:12:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:37.917 19:12:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:37.917 19:12:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:37.917 19:12:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.917 19:12:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:37.917 19:12:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.917 19:12:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.917 19:12:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:37.917 19:12:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:37.917 19:12:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.917 No valid GPT data, bailing 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # pt= 00:04:37.917 19:12:34 -- scripts/common.sh@395 -- # return 1 00:04:37.917 19:12:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.917 1+0 records in 00:04:37.917 1+0 records out 00:04:37.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482977 s, 217 MB/s 00:04:37.917 19:12:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.917 19:12:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.917 19:12:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:37.917 19:12:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:37.917 19:12:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:37.917 No valid GPT data, bailing 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # pt= 00:04:37.917 19:12:34 -- scripts/common.sh@395 -- # return 1 00:04:37.917 19:12:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:37.917 1+0 records in 00:04:37.917 1+0 records out 00:04:37.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484805 s, 216 MB/s 00:04:37.917 19:12:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.917 19:12:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.917 19:12:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:37.917 19:12:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:37.917 19:12:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:37.917 No valid GPT data, bailing 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # pt= 00:04:37.917 19:12:34 -- scripts/common.sh@395 -- # return 1 00:04:37.917 19:12:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:37.917 1+0 records in 00:04:37.917 1+0 records out 00:04:37.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047089 s, 223 MB/s 00:04:37.917 19:12:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.917 19:12:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.917 19:12:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:37.917 19:12:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:37.917 19:12:34 -- 
scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:37.917 No valid GPT data, bailing 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:37.917 19:12:34 -- scripts/common.sh@394 -- # pt= 00:04:37.917 19:12:34 -- scripts/common.sh@395 -- # return 1 00:04:37.917 19:12:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:37.917 1+0 records in 00:04:37.917 1+0 records out 00:04:37.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477848 s, 219 MB/s 00:04:37.917 19:12:34 -- spdk/autotest.sh@105 -- # sync 00:04:37.917 19:12:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.917 19:12:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.917 19:12:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:38.853 19:12:36 -- spdk/autotest.sh@111 -- # uname -s 00:04:38.853 19:12:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:38.853 19:12:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:38.853 19:12:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:39.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.420 Hugepages 00:04:39.420 node hugesize free / total 00:04:39.420 node0 1048576kB 0 / 0 00:04:39.420 node0 2048kB 0 / 0 00:04:39.420 00:04:39.420 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.420 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:39.420 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:39.420 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:39.678 19:12:37 -- spdk/autotest.sh@117 -- # uname -s 00:04:39.678 19:12:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:39.678 19:12:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:39.678 19:12:37 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.246 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.505 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.505 19:12:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:41.441 19:12:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:41.441 19:12:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:41.441 19:12:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:41.441 19:12:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:41.441 19:12:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.441 19:12:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.441 19:12:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.441 19:12:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.441 19:12:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.441 19:12:39 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:41.441 19:12:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:41.441 19:12:39 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, 
so not binding PCI dev 00:04:41.959 Waiting for block devices as requested 00:04:41.959 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:41.959 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:41.959 19:12:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:41.959 19:12:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:41.959 19:12:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:41.959 19:12:40 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:41.959 19:12:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:41.959 19:12:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:41.959 19:12:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:41.959 19:12:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:41.959 19:12:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:41.959 19:12:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:42.218 19:12:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:42.218 19:12:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:42.218 19:12:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:42.218 19:12:40 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:42.218 19:12:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:42.218 19:12:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:42.218 19:12:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:42.218 19:12:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.218 19:12:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:42.219 19:12:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:42.219 19:12:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:42.219 19:12:40 -- common/autotest_common.sh@1543 -- # continue 00:04:42.219 19:12:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:42.219 19:12:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:42.219 19:12:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:42.219 19:12:40 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:42.219 19:12:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:42.219 19:12:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:42.219 19:12:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:42.219 19:12:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:42.219 19:12:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:42.219 19:12:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:42.219 19:12:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:42.219 19:12:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:42.219 19:12:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:42.219 19:12:40 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:42.219 19:12:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:42.219 19:12:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 
00:04:42.219 19:12:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:42.219 19:12:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:42.219 19:12:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.219 19:12:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:42.219 19:12:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:42.219 19:12:40 -- common/autotest_common.sh@1543 -- # continue 00:04:42.219 19:12:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:42.219 19:12:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.219 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:04:42.219 19:12:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:42.219 19:12:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.219 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:04:42.219 19:12:40 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.046 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.046 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.046 19:12:41 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:43.046 19:12:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.046 19:12:41 -- common/autotest_common.sh@10 -- # set +x 00:04:43.046 19:12:41 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:43.046 19:12:41 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:43.046 19:12:41 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.046 19:12:41 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:43.046 19:12:41 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:43.046 19:12:41 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:43.046 19:12:41 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:43.046 19:12:41 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:43.046 19:12:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:43.046 19:12:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:43.046 19:12:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.046 19:12:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:43.046 19:12:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:43.046 19:12:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:43.046 19:12:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:43.046 19:12:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:43.046 19:12:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:43.046 19:12:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:43.046 19:12:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.046 19:12:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:43.046 19:12:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:43.046 19:12:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:43.046 19:12:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.046 19:12:41 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:43.046 19:12:41 -- common/autotest_common.sh@1572 
-- # return 0 00:04:43.046 19:12:41 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:43.046 19:12:41 -- common/autotest_common.sh@1580 -- # return 0 00:04:43.046 19:12:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:43.046 19:12:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:43.046 19:12:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:43.046 19:12:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:43.046 19:12:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:43.046 19:12:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.046 19:12:41 -- common/autotest_common.sh@10 -- # set +x 00:04:43.305 19:12:41 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:43.305 19:12:41 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:43.305 19:12:41 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:43.305 19:12:41 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.305 19:12:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.305 19:12:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.305 19:12:41 -- common/autotest_common.sh@10 -- # set +x 00:04:43.305 ************************************ 00:04:43.305 START TEST env 00:04:43.305 ************************************ 00:04:43.305 19:12:41 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.305 * Looking for test storage... 00:04:43.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:43.305 19:12:41 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.305 19:12:41 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.305 19:12:41 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.305 19:12:41 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.305 19:12:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.305 19:12:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.305 19:12:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.305 19:12:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.305 19:12:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.305 19:12:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.305 19:12:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.305 19:12:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.305 19:12:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.305 19:12:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.305 19:12:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.305 19:12:41 env -- scripts/common.sh@344 -- # case "$op" in 00:04:43.305 19:12:41 env -- scripts/common.sh@345 -- # : 1 00:04:43.305 19:12:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.305 19:12:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.305 19:12:41 env -- scripts/common.sh@365 -- # decimal 1 00:04:43.305 19:12:41 env -- scripts/common.sh@353 -- # local d=1 00:04:43.305 19:12:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.305 19:12:41 env -- scripts/common.sh@355 -- # echo 1 00:04:43.305 19:12:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.305 19:12:41 env -- scripts/common.sh@366 -- # decimal 2 00:04:43.305 19:12:41 env -- scripts/common.sh@353 -- # local d=2 00:04:43.305 19:12:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.305 19:12:41 env -- scripts/common.sh@355 -- # echo 2 00:04:43.305 19:12:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.305 19:12:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.305 19:12:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.305 19:12:41 env -- scripts/common.sh@368 -- # return 0 00:04:43.305 19:12:41 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.306 19:12:41 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.306 --rc genhtml_branch_coverage=1 00:04:43.306 --rc genhtml_function_coverage=1 00:04:43.306 --rc genhtml_legend=1 00:04:43.306 --rc geninfo_all_blocks=1 00:04:43.306 --rc geninfo_unexecuted_blocks=1 00:04:43.306 00:04:43.306 ' 00:04:43.306 19:12:41 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.306 --rc genhtml_branch_coverage=1 00:04:43.306 --rc genhtml_function_coverage=1 00:04:43.306 --rc genhtml_legend=1 00:04:43.306 --rc geninfo_all_blocks=1 00:04:43.306 --rc geninfo_unexecuted_blocks=1 00:04:43.306 00:04:43.306 ' 00:04:43.306 19:12:41 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.306 --rc genhtml_branch_coverage=1 00:04:43.306 --rc genhtml_function_coverage=1 00:04:43.306 --rc genhtml_legend=1 00:04:43.306 --rc geninfo_all_blocks=1 00:04:43.306 --rc geninfo_unexecuted_blocks=1 00:04:43.306 00:04:43.306 ' 00:04:43.306 19:12:41 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.306 --rc genhtml_branch_coverage=1 00:04:43.306 --rc genhtml_function_coverage=1 00:04:43.306 --rc genhtml_legend=1 00:04:43.306 --rc geninfo_all_blocks=1 00:04:43.306 --rc geninfo_unexecuted_blocks=1 00:04:43.306 00:04:43.306 ' 00:04:43.306 19:12:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.306 19:12:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.306 19:12:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.306 19:12:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.306 ************************************ 00:04:43.306 START TEST env_memory 00:04:43.306 ************************************ 00:04:43.306 19:12:41 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.565 00:04:43.565 00:04:43.565 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.565 http://cunit.sourceforge.net/ 00:04:43.565 00:04:43.565 00:04:43.565 Suite: memory 00:04:43.565 Test: alloc and free memory map ...[2024-11-26 19:12:41.784060] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.565 passed 00:04:43.565 Test: mem map translation ...[2024-11-26 19:12:41.815186] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.565 [2024-11-26 19:12:41.815232] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.565 [2024-11-26 19:12:41.815288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.565 [2024-11-26 19:12:41.815299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.565 passed 00:04:43.565 Test: mem map registration ...[2024-11-26 19:12:41.879395] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:43.565 [2024-11-26 19:12:41.879442] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:43.565 passed 00:04:43.565 Test: mem map adjacent registrations ...passed 00:04:43.565 00:04:43.565 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.565 suites 1 1 n/a 0 0 00:04:43.565 tests 4 4 4 0 0 00:04:43.565 asserts 152 152 152 0 n/a 00:04:43.565 00:04:43.565 Elapsed time = 0.211 seconds 00:04:43.565 00:04:43.565 real 0m0.227s 00:04:43.565 user 0m0.211s 00:04:43.565 sys 0m0.011s 00:04:43.565 19:12:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.565 19:12:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:43.565 ************************************ 00:04:43.565 END TEST env_memory 00:04:43.565 ************************************ 00:04:43.825 19:12:42 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:43.825 19:12:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.825 19:12:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.825 19:12:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.825 ************************************ 00:04:43.825 START TEST env_vtophys 00:04:43.825 ************************************ 00:04:43.825 19:12:42 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:43.825 EAL: lib.eal log level changed from notice to debug 00:04:43.825 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 1 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 2 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 3 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 4 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 5 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 6 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 7 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 8 as core 0 on socket 0 00:04:43.825 EAL: Detected lcore 9 as core 0 on socket 0 00:04:43.825 EAL: Maximum logical cores by configuration: 128 00:04:43.825 EAL: Detected CPU lcores: 10 00:04:43.825 EAL: Detected NUMA nodes: 1 00:04:43.825 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:43.825 EAL: Detected shared linkage of DPDK 00:04:43.825 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:43.825 EAL: Selected IOVA mode 'PA' 00:04:43.825 EAL: Probing VFIO support... 00:04:43.825 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:43.825 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:43.825 EAL: Ask a virtual area of 0x2e000 bytes 00:04:43.825 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:43.825 EAL: Setting up physically contiguous memory... 00:04:43.825 EAL: Setting maximum number of open files to 524288 00:04:43.825 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:43.825 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:43.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.825 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:43.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.825 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:43.825 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:43.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.825 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:43.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.825 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:43.825 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:43.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.825 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:43.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.825 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:43.825 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:43.825 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.825 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:43.825 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.825 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.825 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:43.825 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:43.825 EAL: Hugepages will be freed exactly as allocated. 00:04:43.825 EAL: No shared files mode enabled, IPC is disabled 00:04:43.825 EAL: No shared files mode enabled, IPC is disabled 00:04:43.825 EAL: TSC frequency is ~2200000 KHz 00:04:43.825 EAL: Main lcore 0 is ready (tid=7f88ff295a00;cpuset=[0]) 00:04:43.825 EAL: Trying to obtain current memory policy. 00:04:43.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.825 EAL: Restoring previous memory policy: 0 00:04:43.825 EAL: request: mp_malloc_sync 00:04:43.825 EAL: No shared files mode enabled, IPC is disabled 00:04:43.825 EAL: Heap on socket 0 was expanded by 2MB 00:04:43.825 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:43.825 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:43.825 EAL: Mem event callback 'spdk:(nil)' registered 00:04:43.826 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:43.826 00:04:43.826 00:04:43.826 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.826 http://cunit.sourceforge.net/ 00:04:43.826 00:04:43.826 00:04:43.826 Suite: components_suite 00:04:43.826 Test: vtophys_malloc_test ...passed 00:04:43.826 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:43.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.826 EAL: Restoring previous memory policy: 4 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was expanded by 4MB 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was shrunk by 4MB 00:04:43.826 EAL: Trying to obtain current memory policy. 00:04:43.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.826 EAL: Restoring previous memory policy: 4 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was expanded by 6MB 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was shrunk by 6MB 00:04:43.826 EAL: Trying to obtain current memory policy. 00:04:43.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.826 EAL: Restoring previous memory policy: 4 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.826 EAL: Trying to obtain current memory policy. 00:04:43.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.826 EAL: Restoring previous memory policy: 4 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.826 EAL: Trying to obtain current memory policy. 00:04:43.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.826 EAL: Restoring previous memory policy: 4 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.826 EAL: Trying to obtain current memory policy. 
00:04:43.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.826 EAL: Restoring previous memory policy: 4 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.826 EAL: request: mp_malloc_sync 00:04:43.826 EAL: No shared files mode enabled, IPC is disabled 00:04:43.826 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.826 EAL: Trying to obtain current memory policy. 00:04:43.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.085 EAL: Restoring previous memory policy: 4 00:04:44.085 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.085 EAL: request: mp_malloc_sync 00:04:44.085 EAL: No shared files mode enabled, IPC is disabled 00:04:44.085 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.085 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.085 EAL: request: mp_malloc_sync 00:04:44.085 EAL: No shared files mode enabled, IPC is disabled 00:04:44.085 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.085 EAL: Trying to obtain current memory policy. 00:04:44.085 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.085 EAL: Restoring previous memory policy: 4 00:04:44.085 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.085 EAL: request: mp_malloc_sync 00:04:44.085 EAL: No shared files mode enabled, IPC is disabled 00:04:44.085 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.085 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.085 EAL: request: mp_malloc_sync 00:04:44.085 EAL: No shared files mode enabled, IPC is disabled 00:04:44.085 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.085 EAL: Trying to obtain current memory policy. 00:04:44.085 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.345 EAL: Restoring previous memory policy: 4 00:04:44.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.345 EAL: request: mp_malloc_sync 00:04:44.345 EAL: No shared files mode enabled, IPC is disabled 00:04:44.345 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.604 EAL: request: mp_malloc_sync 00:04:44.604 EAL: No shared files mode enabled, IPC is disabled 00:04:44.604 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.604 EAL: Trying to obtain current memory policy. 
00:04:44.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.864 EAL: Restoring previous memory policy: 4 00:04:44.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.864 EAL: request: mp_malloc_sync 00:04:44.864 EAL: No shared files mode enabled, IPC is disabled 00:04:44.864 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.122 passed 00:04:45.122 00:04:45.122 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.122 suites 1 1 n/a 0 0 00:04:45.122 tests 2 2 2 0 0 00:04:45.122 asserts 5554 5554 5554 0 n/a 00:04:45.122 00:04:45.122 Elapsed time = 1.293 seconds 00:04:45.122 EAL: request: mp_malloc_sync 00:04:45.122 EAL: No shared files mode enabled, IPC is disabled 00:04:45.122 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:45.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.122 EAL: request: mp_malloc_sync 00:04:45.122 EAL: No shared files mode enabled, IPC is disabled 00:04:45.122 EAL: Heap on socket 0 was shrunk by 2MB 00:04:45.122 EAL: No shared files mode enabled, IPC is disabled 00:04:45.122 EAL: No shared files mode enabled, IPC is disabled 00:04:45.122 EAL: No shared files mode enabled, IPC is disabled 00:04:45.122 00:04:45.122 real 0m1.502s 00:04:45.122 user 0m0.827s 00:04:45.122 sys 0m0.543s 00:04:45.122 19:12:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.122 19:12:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:45.122 ************************************ 00:04:45.122 END TEST env_vtophys 00:04:45.122 ************************************ 00:04:45.122 19:12:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:45.122 19:12:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.122 19:12:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.122 19:12:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.381 ************************************ 00:04:45.381 START TEST env_pci 00:04:45.381 ************************************ 00:04:45.381 19:12:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:45.381 00:04:45.381 00:04:45.381 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.381 http://cunit.sourceforge.net/ 00:04:45.381 00:04:45.381 00:04:45.381 Suite: pci 00:04:45.381 Test: pci_hook ...[2024-11-26 19:12:43.577152] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56520 has claimed it 00:04:45.381 passed 00:04:45.381 00:04:45.381 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.381 suites 1 1 n/a 0 0 00:04:45.381 tests 1 1 1 0 0 00:04:45.381 asserts 25 25 25 0 n/a 00:04:45.381 00:04:45.381 Elapsed time = 0.002 seconds 00:04:45.381 EAL: Cannot find device (10000:00:01.0) 00:04:45.381 EAL: Failed to attach device on primary process 00:04:45.381 00:04:45.381 real 0m0.016s 00:04:45.381 user 0m0.006s 00:04:45.382 sys 0m0.009s 00:04:45.382 19:12:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.382 19:12:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:45.382 ************************************ 00:04:45.382 END TEST env_pci 00:04:45.382 ************************************ 00:04:45.382 19:12:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.382 19:12:43 env -- env/env.sh@15 -- # uname 00:04:45.382 19:12:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.382 19:12:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:45.382 19:12:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.382 19:12:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:45.382 19:12:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.382 19:12:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.382 ************************************ 00:04:45.382 START TEST env_dpdk_post_init 00:04:45.382 ************************************ 00:04:45.382 19:12:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.382 EAL: Detected CPU lcores: 10 00:04:45.382 EAL: Detected NUMA nodes: 1 00:04:45.382 EAL: Detected shared linkage of DPDK 00:04:45.382 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.382 EAL: Selected IOVA mode 'PA' 00:04:45.382 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.382 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:45.382 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:45.642 Starting DPDK initialization... 00:04:45.642 Starting SPDK post initialization... 00:04:45.642 SPDK NVMe probe 00:04:45.642 Attaching to 0000:00:10.0 00:04:45.642 Attaching to 0000:00:11.0 00:04:45.642 Attached to 0000:00:10.0 00:04:45.642 Attached to 0000:00:11.0 00:04:45.642 Cleaning up... 00:04:45.642 00:04:45.642 real 0m0.188s 00:04:45.642 user 0m0.058s 00:04:45.642 sys 0m0.030s 00:04:45.642 19:12:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.642 19:12:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 ************************************ 00:04:45.642 END TEST env_dpdk_post_init 00:04:45.642 ************************************ 00:04:45.642 19:12:43 env -- env/env.sh@26 -- # uname 00:04:45.642 19:12:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:45.642 19:12:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.642 19:12:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.642 19:12:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.642 19:12:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 ************************************ 00:04:45.642 START TEST env_mem_callbacks 00:04:45.642 ************************************ 00:04:45.642 19:12:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.642 EAL: Detected CPU lcores: 10 00:04:45.642 EAL: Detected NUMA nodes: 1 00:04:45.642 EAL: Detected shared linkage of DPDK 00:04:45.642 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.642 EAL: Selected IOVA mode 'PA' 00:04:45.642 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.642 00:04:45.642 00:04:45.642 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.642 http://cunit.sourceforge.net/ 00:04:45.642 00:04:45.642 00:04:45.642 Suite: memory 00:04:45.642 Test: test ... 
00:04:45.642 register 0x200000200000 2097152 00:04:45.642 malloc 3145728 00:04:45.642 register 0x200000400000 4194304 00:04:45.642 buf 0x200000500000 len 3145728 PASSED 00:04:45.642 malloc 64 00:04:45.642 buf 0x2000004fff40 len 64 PASSED 00:04:45.642 malloc 4194304 00:04:45.642 register 0x200000800000 6291456 00:04:45.642 buf 0x200000a00000 len 4194304 PASSED 00:04:45.642 free 0x200000500000 3145728 00:04:45.642 free 0x2000004fff40 64 00:04:45.642 unregister 0x200000400000 4194304 PASSED 00:04:45.642 free 0x200000a00000 4194304 00:04:45.642 unregister 0x200000800000 6291456 PASSED 00:04:45.642 malloc 8388608 00:04:45.642 register 0x200000400000 10485760 00:04:45.642 buf 0x200000600000 len 8388608 PASSED 00:04:45.642 free 0x200000600000 8388608 00:04:45.642 unregister 0x200000400000 10485760 PASSED 00:04:45.642 passed 00:04:45.642 00:04:45.642 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.642 suites 1 1 n/a 0 0 00:04:45.642 tests 1 1 1 0 0 00:04:45.642 asserts 15 15 15 0 n/a 00:04:45.642 00:04:45.642 Elapsed time = 0.009 seconds 00:04:45.642 00:04:45.642 real 0m0.144s 00:04:45.642 user 0m0.018s 00:04:45.642 sys 0m0.025s 00:04:45.642 19:12:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.642 19:12:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 ************************************ 00:04:45.642 END TEST env_mem_callbacks 00:04:45.642 ************************************ 00:04:45.642 00:04:45.642 real 0m2.563s 00:04:45.642 user 0m1.361s 00:04:45.642 sys 0m0.851s 00:04:45.642 19:12:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.642 ************************************ 00:04:45.642 END TEST env 00:04:45.642 19:12:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 ************************************ 00:04:45.901 19:12:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:45.901 19:12:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.901 19:12:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.901 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:04:45.901 ************************************ 00:04:45.901 START TEST rpc 00:04:45.901 ************************************ 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:45.901 * Looking for test storage... 
00:04:45.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.901 19:12:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.901 19:12:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.901 19:12:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.901 19:12:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.901 19:12:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.901 19:12:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.901 19:12:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.901 19:12:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.901 19:12:44 rpc -- scripts/common.sh@345 -- # : 1 00:04:45.901 19:12:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.901 19:12:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.901 19:12:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.901 19:12:44 rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.901 19:12:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.901 19:12:44 rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.901 19:12:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.901 19:12:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.901 19:12:44 rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.901 19:12:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.901 19:12:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.901 19:12:44 rpc -- scripts/common.sh@368 -- # return 0 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.901 --rc genhtml_branch_coverage=1 00:04:45.901 --rc genhtml_function_coverage=1 00:04:45.901 --rc genhtml_legend=1 00:04:45.901 --rc geninfo_all_blocks=1 00:04:45.901 --rc geninfo_unexecuted_blocks=1 00:04:45.901 00:04:45.901 ' 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.901 --rc genhtml_branch_coverage=1 00:04:45.901 --rc genhtml_function_coverage=1 00:04:45.901 --rc genhtml_legend=1 00:04:45.901 --rc geninfo_all_blocks=1 00:04:45.901 --rc geninfo_unexecuted_blocks=1 00:04:45.901 00:04:45.901 ' 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.901 --rc genhtml_branch_coverage=1 00:04:45.901 --rc genhtml_function_coverage=1 00:04:45.901 --rc 
genhtml_legend=1 00:04:45.901 --rc geninfo_all_blocks=1 00:04:45.901 --rc geninfo_unexecuted_blocks=1 00:04:45.901 00:04:45.901 ' 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.901 --rc genhtml_branch_coverage=1 00:04:45.901 --rc genhtml_function_coverage=1 00:04:45.901 --rc genhtml_legend=1 00:04:45.901 --rc geninfo_all_blocks=1 00:04:45.901 --rc geninfo_unexecuted_blocks=1 00:04:45.901 00:04:45.901 ' 00:04:45.901 19:12:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56638 00:04:45.901 19:12:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.901 19:12:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56638 00:04:45.901 19:12:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 56638 ']' 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.901 19:12:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 [2024-11-26 19:12:44.368325] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:04:46.160 [2024-11-26 19:12:44.368872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56638 ] 00:04:46.160 [2024-11-26 19:12:44.520467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.160 [2024-11-26 19:12:44.574835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:46.160 [2024-11-26 19:12:44.574974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56638' to capture a snapshot of events at runtime. 00:04:46.160 [2024-11-26 19:12:44.574994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:46.160 [2024-11-26 19:12:44.575005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:46.160 [2024-11-26 19:12:44.575014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56638 for offline analysis/debug. 
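The notices above, printed because spdk_tgt was launched with '-e bdev', spell out how the bdev tracepoints from this run could be inspected; a minimal sketch from a second shell, using only the commands the notices themselves quote (the relative path to the spdk_trace binary and the copy destination are assumptions):

  # decode the live trace ring of the target started above (pid taken from the notice)
  ./build/bin/spdk_trace -s spdk_tgt -p 56638
  # or keep the shared-memory trace file around for offline analysis after the target exits
  cp /dev/shm/spdk_tgt_trace.pid56638 /tmp/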
00:04:46.160 [2024-11-26 19:12:44.575536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.440 [2024-11-26 19:12:44.656180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:46.440 19:12:44 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.440 19:12:44 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:46.440 19:12:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.440 19:12:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.440 19:12:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:46.440 19:12:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:46.440 19:12:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.440 19:12:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.440 19:12:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 ************************************ 00:04:46.699 START TEST rpc_integrity 00:04:46.699 ************************************ 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 19:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.699 { 00:04:46.699 "name": "Malloc0", 00:04:46.699 "aliases": [ 00:04:46.699 "80493ff7-8f1d-4a66-9cae-1305bd9625db" 00:04:46.699 ], 00:04:46.699 "product_name": "Malloc disk", 00:04:46.699 "block_size": 512, 00:04:46.699 "num_blocks": 16384, 00:04:46.699 "uuid": "80493ff7-8f1d-4a66-9cae-1305bd9625db", 00:04:46.699 "assigned_rate_limits": { 00:04:46.699 "rw_ios_per_sec": 0, 00:04:46.699 "rw_mbytes_per_sec": 0, 00:04:46.699 "r_mbytes_per_sec": 0, 00:04:46.699 "w_mbytes_per_sec": 0 00:04:46.699 }, 00:04:46.699 "claimed": false, 00:04:46.699 "zoned": false, 00:04:46.699 
"supported_io_types": { 00:04:46.699 "read": true, 00:04:46.699 "write": true, 00:04:46.699 "unmap": true, 00:04:46.699 "flush": true, 00:04:46.699 "reset": true, 00:04:46.699 "nvme_admin": false, 00:04:46.699 "nvme_io": false, 00:04:46.699 "nvme_io_md": false, 00:04:46.699 "write_zeroes": true, 00:04:46.699 "zcopy": true, 00:04:46.699 "get_zone_info": false, 00:04:46.699 "zone_management": false, 00:04:46.699 "zone_append": false, 00:04:46.699 "compare": false, 00:04:46.699 "compare_and_write": false, 00:04:46.699 "abort": true, 00:04:46.699 "seek_hole": false, 00:04:46.699 "seek_data": false, 00:04:46.699 "copy": true, 00:04:46.699 "nvme_iov_md": false 00:04:46.699 }, 00:04:46.699 "memory_domains": [ 00:04:46.699 { 00:04:46.699 "dma_device_id": "system", 00:04:46.699 "dma_device_type": 1 00:04:46.699 }, 00:04:46.699 { 00:04:46.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.699 "dma_device_type": 2 00:04:46.699 } 00:04:46.699 ], 00:04:46.699 "driver_specific": {} 00:04:46.699 } 00:04:46.699 ]' 00:04:46.699 19:12:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.699 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.699 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:46.699 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.699 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 [2024-11-26 19:12:45.040070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:46.699 [2024-11-26 19:12:45.040264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.699 [2024-11-26 19:12:45.040311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb86050 00:04:46.699 [2024-11-26 19:12:45.040321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.699 [2024-11-26 19:12:45.041896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.699 [2024-11-26 19:12:45.041985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.699 Passthru0 00:04:46.699 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.699 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.699 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.699 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.699 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.699 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.699 { 00:04:46.699 "name": "Malloc0", 00:04:46.699 "aliases": [ 00:04:46.699 "80493ff7-8f1d-4a66-9cae-1305bd9625db" 00:04:46.699 ], 00:04:46.699 "product_name": "Malloc disk", 00:04:46.699 "block_size": 512, 00:04:46.699 "num_blocks": 16384, 00:04:46.699 "uuid": "80493ff7-8f1d-4a66-9cae-1305bd9625db", 00:04:46.699 "assigned_rate_limits": { 00:04:46.699 "rw_ios_per_sec": 0, 00:04:46.699 "rw_mbytes_per_sec": 0, 00:04:46.699 "r_mbytes_per_sec": 0, 00:04:46.699 "w_mbytes_per_sec": 0 00:04:46.699 }, 00:04:46.699 "claimed": true, 00:04:46.699 "claim_type": "exclusive_write", 00:04:46.699 "zoned": false, 00:04:46.699 "supported_io_types": { 00:04:46.700 "read": true, 00:04:46.700 "write": true, 00:04:46.700 "unmap": true, 00:04:46.700 "flush": true, 00:04:46.700 "reset": true, 00:04:46.700 "nvme_admin": false, 
00:04:46.700 "nvme_io": false, 00:04:46.700 "nvme_io_md": false, 00:04:46.700 "write_zeroes": true, 00:04:46.700 "zcopy": true, 00:04:46.700 "get_zone_info": false, 00:04:46.700 "zone_management": false, 00:04:46.700 "zone_append": false, 00:04:46.700 "compare": false, 00:04:46.700 "compare_and_write": false, 00:04:46.700 "abort": true, 00:04:46.700 "seek_hole": false, 00:04:46.700 "seek_data": false, 00:04:46.700 "copy": true, 00:04:46.700 "nvme_iov_md": false 00:04:46.700 }, 00:04:46.700 "memory_domains": [ 00:04:46.700 { 00:04:46.700 "dma_device_id": "system", 00:04:46.700 "dma_device_type": 1 00:04:46.700 }, 00:04:46.700 { 00:04:46.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.700 "dma_device_type": 2 00:04:46.700 } 00:04:46.700 ], 00:04:46.700 "driver_specific": {} 00:04:46.700 }, 00:04:46.700 { 00:04:46.700 "name": "Passthru0", 00:04:46.700 "aliases": [ 00:04:46.700 "4a132df4-cd8a-5dbb-b9aa-7a304e8f4a4a" 00:04:46.700 ], 00:04:46.700 "product_name": "passthru", 00:04:46.700 "block_size": 512, 00:04:46.700 "num_blocks": 16384, 00:04:46.700 "uuid": "4a132df4-cd8a-5dbb-b9aa-7a304e8f4a4a", 00:04:46.700 "assigned_rate_limits": { 00:04:46.700 "rw_ios_per_sec": 0, 00:04:46.700 "rw_mbytes_per_sec": 0, 00:04:46.700 "r_mbytes_per_sec": 0, 00:04:46.700 "w_mbytes_per_sec": 0 00:04:46.700 }, 00:04:46.700 "claimed": false, 00:04:46.700 "zoned": false, 00:04:46.700 "supported_io_types": { 00:04:46.700 "read": true, 00:04:46.700 "write": true, 00:04:46.700 "unmap": true, 00:04:46.700 "flush": true, 00:04:46.700 "reset": true, 00:04:46.700 "nvme_admin": false, 00:04:46.700 "nvme_io": false, 00:04:46.700 "nvme_io_md": false, 00:04:46.700 "write_zeroes": true, 00:04:46.700 "zcopy": true, 00:04:46.700 "get_zone_info": false, 00:04:46.700 "zone_management": false, 00:04:46.700 "zone_append": false, 00:04:46.700 "compare": false, 00:04:46.700 "compare_and_write": false, 00:04:46.700 "abort": true, 00:04:46.700 "seek_hole": false, 00:04:46.700 "seek_data": false, 00:04:46.700 "copy": true, 00:04:46.700 "nvme_iov_md": false 00:04:46.700 }, 00:04:46.700 "memory_domains": [ 00:04:46.700 { 00:04:46.700 "dma_device_id": "system", 00:04:46.700 "dma_device_type": 1 00:04:46.700 }, 00:04:46.700 { 00:04:46.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.700 "dma_device_type": 2 00:04:46.700 } 00:04:46.700 ], 00:04:46.700 "driver_specific": { 00:04:46.700 "passthru": { 00:04:46.700 "name": "Passthru0", 00:04:46.700 "base_bdev_name": "Malloc0" 00:04:46.700 } 00:04:46.700 } 00:04:46.700 } 00:04:46.700 ]' 00:04:46.700 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.700 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.700 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.700 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.700 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.700 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.700 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.700 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.700 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.958 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.958 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.958 19:12:45 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.958 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.958 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.958 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.958 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.958 ************************************ 00:04:46.958 END TEST rpc_integrity 00:04:46.958 ************************************ 00:04:46.958 19:12:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.958 00:04:46.958 real 0m0.326s 00:04:46.958 user 0m0.217s 00:04:46.958 sys 0m0.040s 00:04:46.958 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.958 19:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.958 19:12:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.958 19:12:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.958 19:12:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.958 19:12:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.958 ************************************ 00:04:46.958 START TEST rpc_plugins 00:04:46.958 ************************************ 00:04:46.958 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:46.958 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.958 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.958 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.958 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.958 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.958 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.958 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.958 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.958 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.958 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.958 { 00:04:46.958 "name": "Malloc1", 00:04:46.958 "aliases": [ 00:04:46.958 "d36f3833-f221-4e80-8655-4b1ef9c3aaaa" 00:04:46.958 ], 00:04:46.958 "product_name": "Malloc disk", 00:04:46.958 "block_size": 4096, 00:04:46.958 "num_blocks": 256, 00:04:46.958 "uuid": "d36f3833-f221-4e80-8655-4b1ef9c3aaaa", 00:04:46.958 "assigned_rate_limits": { 00:04:46.958 "rw_ios_per_sec": 0, 00:04:46.958 "rw_mbytes_per_sec": 0, 00:04:46.958 "r_mbytes_per_sec": 0, 00:04:46.958 "w_mbytes_per_sec": 0 00:04:46.958 }, 00:04:46.958 "claimed": false, 00:04:46.958 "zoned": false, 00:04:46.958 "supported_io_types": { 00:04:46.958 "read": true, 00:04:46.958 "write": true, 00:04:46.958 "unmap": true, 00:04:46.958 "flush": true, 00:04:46.958 "reset": true, 00:04:46.958 "nvme_admin": false, 00:04:46.958 "nvme_io": false, 00:04:46.958 "nvme_io_md": false, 00:04:46.958 "write_zeroes": true, 00:04:46.959 "zcopy": true, 00:04:46.959 "get_zone_info": false, 00:04:46.959 "zone_management": false, 00:04:46.959 "zone_append": false, 00:04:46.959 "compare": false, 00:04:46.959 "compare_and_write": false, 00:04:46.959 "abort": true, 00:04:46.959 "seek_hole": false, 00:04:46.959 "seek_data": false, 00:04:46.959 "copy": true, 00:04:46.959 "nvme_iov_md": false 00:04:46.959 }, 00:04:46.959 "memory_domains": [ 00:04:46.959 { 
00:04:46.959 "dma_device_id": "system", 00:04:46.959 "dma_device_type": 1 00:04:46.959 }, 00:04:46.959 { 00:04:46.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.959 "dma_device_type": 2 00:04:46.959 } 00:04:46.959 ], 00:04:46.959 "driver_specific": {} 00:04:46.959 } 00:04:46.959 ]' 00:04:46.959 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:46.959 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.959 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.959 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.959 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.959 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.959 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.959 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.959 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.959 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.959 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.959 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:47.216 ************************************ 00:04:47.216 END TEST rpc_plugins 00:04:47.216 ************************************ 00:04:47.216 19:12:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:47.216 00:04:47.216 real 0m0.173s 00:04:47.216 user 0m0.115s 00:04:47.216 sys 0m0.016s 00:04:47.216 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.216 19:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.216 19:12:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:47.216 19:12:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.216 19:12:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.216 19:12:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.216 ************************************ 00:04:47.216 START TEST rpc_trace_cmd_test 00:04:47.216 ************************************ 00:04:47.216 19:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:47.216 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:47.216 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:47.216 19:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.216 19:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.216 19:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.216 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:47.217 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56638", 00:04:47.217 "tpoint_group_mask": "0x8", 00:04:47.217 "iscsi_conn": { 00:04:47.217 "mask": "0x2", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "scsi": { 00:04:47.217 "mask": "0x4", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "bdev": { 00:04:47.217 "mask": "0x8", 00:04:47.217 "tpoint_mask": "0xffffffffffffffff" 00:04:47.217 }, 00:04:47.217 "nvmf_rdma": { 00:04:47.217 "mask": "0x10", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "nvmf_tcp": { 00:04:47.217 "mask": "0x20", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "ftl": { 00:04:47.217 
"mask": "0x40", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "blobfs": { 00:04:47.217 "mask": "0x80", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "dsa": { 00:04:47.217 "mask": "0x200", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "thread": { 00:04:47.217 "mask": "0x400", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "nvme_pcie": { 00:04:47.217 "mask": "0x800", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "iaa": { 00:04:47.217 "mask": "0x1000", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "nvme_tcp": { 00:04:47.217 "mask": "0x2000", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "bdev_nvme": { 00:04:47.217 "mask": "0x4000", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "sock": { 00:04:47.217 "mask": "0x8000", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "blob": { 00:04:47.217 "mask": "0x10000", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "bdev_raid": { 00:04:47.217 "mask": "0x20000", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 }, 00:04:47.217 "scheduler": { 00:04:47.217 "mask": "0x40000", 00:04:47.217 "tpoint_mask": "0x0" 00:04:47.217 } 00:04:47.217 }' 00:04:47.217 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:47.217 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:47.217 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:47.217 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:47.217 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:47.217 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:47.217 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:47.475 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:47.475 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:47.475 ************************************ 00:04:47.475 END TEST rpc_trace_cmd_test 00:04:47.475 ************************************ 00:04:47.475 19:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:47.475 00:04:47.475 real 0m0.285s 00:04:47.475 user 0m0.241s 00:04:47.475 sys 0m0.033s 00:04:47.475 19:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.475 19:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.475 19:12:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:47.475 19:12:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:47.475 19:12:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:47.475 19:12:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.475 19:12:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.475 19:12:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.475 ************************************ 00:04:47.475 START TEST rpc_daemon_integrity 00:04:47.475 ************************************ 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.475 
19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.475 { 00:04:47.475 "name": "Malloc2", 00:04:47.475 "aliases": [ 00:04:47.475 "85aa6b00-0c1d-4c74-9c41-d30419147a80" 00:04:47.475 ], 00:04:47.475 "product_name": "Malloc disk", 00:04:47.475 "block_size": 512, 00:04:47.475 "num_blocks": 16384, 00:04:47.475 "uuid": "85aa6b00-0c1d-4c74-9c41-d30419147a80", 00:04:47.475 "assigned_rate_limits": { 00:04:47.475 "rw_ios_per_sec": 0, 00:04:47.475 "rw_mbytes_per_sec": 0, 00:04:47.475 "r_mbytes_per_sec": 0, 00:04:47.475 "w_mbytes_per_sec": 0 00:04:47.475 }, 00:04:47.475 "claimed": false, 00:04:47.475 "zoned": false, 00:04:47.475 "supported_io_types": { 00:04:47.475 "read": true, 00:04:47.475 "write": true, 00:04:47.475 "unmap": true, 00:04:47.475 "flush": true, 00:04:47.475 "reset": true, 00:04:47.475 "nvme_admin": false, 00:04:47.475 "nvme_io": false, 00:04:47.475 "nvme_io_md": false, 00:04:47.475 "write_zeroes": true, 00:04:47.475 "zcopy": true, 00:04:47.475 "get_zone_info": false, 00:04:47.475 "zone_management": false, 00:04:47.475 "zone_append": false, 00:04:47.475 "compare": false, 00:04:47.475 "compare_and_write": false, 00:04:47.475 "abort": true, 00:04:47.475 "seek_hole": false, 00:04:47.475 "seek_data": false, 00:04:47.475 "copy": true, 00:04:47.475 "nvme_iov_md": false 00:04:47.475 }, 00:04:47.475 "memory_domains": [ 00:04:47.475 { 00:04:47.475 "dma_device_id": "system", 00:04:47.475 "dma_device_type": 1 00:04:47.475 }, 00:04:47.475 { 00:04:47.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.475 "dma_device_type": 2 00:04:47.475 } 00:04:47.475 ], 00:04:47.475 "driver_specific": {} 00:04:47.475 } 00:04:47.475 ]' 00:04:47.475 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.734 [2024-11-26 19:12:45.964979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:47.734 [2024-11-26 19:12:45.965201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:47.734 [2024-11-26 19:12:45.965228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb91030 00:04:47.734 [2024-11-26 19:12:45.965238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.734 [2024-11-26 19:12:45.966701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.734 [2024-11-26 19:12:45.966735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.734 Passthru0 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.734 19:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.734 { 00:04:47.734 "name": "Malloc2", 00:04:47.734 "aliases": [ 00:04:47.734 "85aa6b00-0c1d-4c74-9c41-d30419147a80" 00:04:47.734 ], 00:04:47.734 "product_name": "Malloc disk", 00:04:47.734 "block_size": 512, 00:04:47.734 "num_blocks": 16384, 00:04:47.734 "uuid": "85aa6b00-0c1d-4c74-9c41-d30419147a80", 00:04:47.734 "assigned_rate_limits": { 00:04:47.734 "rw_ios_per_sec": 0, 00:04:47.734 "rw_mbytes_per_sec": 0, 00:04:47.734 "r_mbytes_per_sec": 0, 00:04:47.734 "w_mbytes_per_sec": 0 00:04:47.734 }, 00:04:47.734 "claimed": true, 00:04:47.734 "claim_type": "exclusive_write", 00:04:47.734 "zoned": false, 00:04:47.734 "supported_io_types": { 00:04:47.734 "read": true, 00:04:47.734 "write": true, 00:04:47.734 "unmap": true, 00:04:47.734 "flush": true, 00:04:47.734 "reset": true, 00:04:47.734 "nvme_admin": false, 00:04:47.734 "nvme_io": false, 00:04:47.734 "nvme_io_md": false, 00:04:47.734 "write_zeroes": true, 00:04:47.734 "zcopy": true, 00:04:47.734 "get_zone_info": false, 00:04:47.734 "zone_management": false, 00:04:47.734 "zone_append": false, 00:04:47.734 "compare": false, 00:04:47.734 "compare_and_write": false, 00:04:47.734 "abort": true, 00:04:47.734 "seek_hole": false, 00:04:47.734 "seek_data": false, 00:04:47.734 "copy": true, 00:04:47.734 "nvme_iov_md": false 00:04:47.734 }, 00:04:47.734 "memory_domains": [ 00:04:47.734 { 00:04:47.734 "dma_device_id": "system", 00:04:47.734 "dma_device_type": 1 00:04:47.734 }, 00:04:47.734 { 00:04:47.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.734 "dma_device_type": 2 00:04:47.734 } 00:04:47.734 ], 00:04:47.734 "driver_specific": {} 00:04:47.734 }, 00:04:47.734 { 00:04:47.734 "name": "Passthru0", 00:04:47.734 "aliases": [ 00:04:47.734 "2a2ccb7e-6d69-5a13-bac0-f166c8455687" 00:04:47.734 ], 00:04:47.734 "product_name": "passthru", 00:04:47.734 "block_size": 512, 00:04:47.734 "num_blocks": 16384, 00:04:47.734 "uuid": "2a2ccb7e-6d69-5a13-bac0-f166c8455687", 00:04:47.734 "assigned_rate_limits": { 00:04:47.734 "rw_ios_per_sec": 0, 00:04:47.734 "rw_mbytes_per_sec": 0, 00:04:47.734 "r_mbytes_per_sec": 0, 00:04:47.734 "w_mbytes_per_sec": 0 00:04:47.734 }, 00:04:47.734 "claimed": false, 00:04:47.734 "zoned": false, 00:04:47.734 "supported_io_types": { 00:04:47.734 "read": true, 00:04:47.734 "write": true, 00:04:47.734 "unmap": true, 00:04:47.734 "flush": true, 00:04:47.734 "reset": true, 00:04:47.734 "nvme_admin": false, 00:04:47.734 "nvme_io": false, 00:04:47.734 "nvme_io_md": 
false, 00:04:47.734 "write_zeroes": true, 00:04:47.734 "zcopy": true, 00:04:47.734 "get_zone_info": false, 00:04:47.734 "zone_management": false, 00:04:47.734 "zone_append": false, 00:04:47.734 "compare": false, 00:04:47.734 "compare_and_write": false, 00:04:47.734 "abort": true, 00:04:47.734 "seek_hole": false, 00:04:47.734 "seek_data": false, 00:04:47.734 "copy": true, 00:04:47.734 "nvme_iov_md": false 00:04:47.734 }, 00:04:47.734 "memory_domains": [ 00:04:47.734 { 00:04:47.734 "dma_device_id": "system", 00:04:47.734 "dma_device_type": 1 00:04:47.734 }, 00:04:47.734 { 00:04:47.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.734 "dma_device_type": 2 00:04:47.734 } 00:04:47.734 ], 00:04:47.734 "driver_specific": { 00:04:47.734 "passthru": { 00:04:47.734 "name": "Passthru0", 00:04:47.734 "base_bdev_name": "Malloc2" 00:04:47.734 } 00:04:47.734 } 00:04:47.734 } 00:04:47.734 ]' 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.734 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:47.735 ************************************ 00:04:47.735 END TEST rpc_daemon_integrity 00:04:47.735 ************************************ 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.735 00:04:47.735 real 0m0.331s 00:04:47.735 user 0m0.217s 00:04:47.735 sys 0m0.044s 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.735 19:12:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.993 19:12:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:47.993 19:12:46 rpc -- rpc/rpc.sh@84 -- # killprocess 56638 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 56638 ']' 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@958 -- # kill -0 56638 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56638 00:04:47.993 killing process with pid 56638 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56638' 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@973 -- # kill 56638 00:04:47.993 19:12:46 rpc -- common/autotest_common.sh@978 -- # wait 56638 00:04:48.251 00:04:48.251 real 0m2.492s 00:04:48.251 user 0m3.125s 00:04:48.251 sys 0m0.714s 00:04:48.251 19:12:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.251 ************************************ 00:04:48.251 END TEST rpc 00:04:48.251 ************************************ 00:04:48.251 19:12:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.251 19:12:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.251 19:12:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.251 19:12:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.251 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:04:48.251 ************************************ 00:04:48.251 START TEST skip_rpc 00:04:48.251 ************************************ 00:04:48.251 19:12:46 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.517 * Looking for test storage... 00:04:48.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.517 19:12:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.517 --rc genhtml_branch_coverage=1 00:04:48.517 --rc genhtml_function_coverage=1 00:04:48.517 --rc genhtml_legend=1 00:04:48.517 --rc geninfo_all_blocks=1 00:04:48.517 --rc geninfo_unexecuted_blocks=1 00:04:48.517 00:04:48.517 ' 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.517 --rc genhtml_branch_coverage=1 00:04:48.517 --rc genhtml_function_coverage=1 00:04:48.517 --rc genhtml_legend=1 00:04:48.517 --rc geninfo_all_blocks=1 00:04:48.517 --rc geninfo_unexecuted_blocks=1 00:04:48.517 00:04:48.517 ' 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.517 --rc genhtml_branch_coverage=1 00:04:48.517 --rc genhtml_function_coverage=1 00:04:48.517 --rc genhtml_legend=1 00:04:48.517 --rc geninfo_all_blocks=1 00:04:48.517 --rc geninfo_unexecuted_blocks=1 00:04:48.517 00:04:48.517 ' 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.517 --rc genhtml_branch_coverage=1 00:04:48.517 --rc genhtml_function_coverage=1 00:04:48.517 --rc genhtml_legend=1 00:04:48.517 --rc geninfo_all_blocks=1 00:04:48.517 --rc geninfo_unexecuted_blocks=1 00:04:48.517 00:04:48.517 ' 00:04:48.517 19:12:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.517 19:12:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.517 19:12:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.517 19:12:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.517 ************************************ 00:04:48.517 START TEST skip_rpc 00:04:48.517 ************************************ 00:04:48.517 19:12:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:48.517 19:12:46 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56836 00:04:48.517 19:12:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:48.517 19:12:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.517 19:12:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.786 [2024-11-26 19:12:46.971090] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:04:48.786 [2024-11-26 19:12:46.971320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56836 ] 00:04:48.786 [2024-11-26 19:12:47.113746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.786 [2024-11-26 19:12:47.162115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.044 [2024-11-26 19:12:47.233829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.319 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56836 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56836 ']' 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56836 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56836 00:04:54.320 killing process with pid 56836 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56836' 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56836 00:04:54.320 19:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56836 00:04:54.320 00:04:54.320 real 0m5.424s 00:04:54.320 user 0m5.053s 00:04:54.320 sys 0m0.285s 00:04:54.320 19:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.320 ************************************ 00:04:54.320 END TEST skip_rpc 00:04:54.320 ************************************ 00:04:54.320 19:12:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.320 19:12:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:54.320 19:12:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.320 19:12:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.320 19:12:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.320 ************************************ 00:04:54.320 START TEST skip_rpc_with_json 00:04:54.320 ************************************ 00:04:54.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56923 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56923 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56923 ']' 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.320 19:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.320 [2024-11-26 19:12:52.460737] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
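The json-config flow that follows is short: the test creates the TCP transport over RPC, dumps the running configuration with save_config, and then boots a second target directly from that file. A minimal sketch of the same sequence driven by hand, assuming scripts/rpc.py stands in for the test's rpc_cmd helper (the transport type, config path and spdk_tgt flags are the ones that appear in the log below):

  # create the TCP transport in the running target, then capture its full configuration
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # a fresh target can then be started straight from that file, with the RPC server disabled
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json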
00:04:54.320 [2024-11-26 19:12:52.461040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56923 ] 00:04:54.320 [2024-11-26 19:12:52.603915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.320 [2024-11-26 19:12:52.655919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.320 [2024-11-26 19:12:52.731172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.259 [2024-11-26 19:12:53.425503] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:55.259 request: 00:04:55.259 { 00:04:55.259 "trtype": "tcp", 00:04:55.259 "method": "nvmf_get_transports", 00:04:55.259 "req_id": 1 00:04:55.259 } 00:04:55.259 Got JSON-RPC error response 00:04:55.259 response: 00:04:55.259 { 00:04:55.259 "code": -19, 00:04:55.259 "message": "No such device" 00:04:55.259 } 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.259 [2024-11-26 19:12:53.437608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.259 19:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.259 { 00:04:55.259 "subsystems": [ 00:04:55.259 { 00:04:55.259 "subsystem": "fsdev", 00:04:55.259 "config": [ 00:04:55.259 { 00:04:55.259 "method": "fsdev_set_opts", 00:04:55.259 "params": { 00:04:55.259 "fsdev_io_pool_size": 65535, 00:04:55.259 "fsdev_io_cache_size": 256 00:04:55.259 } 00:04:55.259 } 00:04:55.259 ] 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "subsystem": "keyring", 00:04:55.259 "config": [] 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "subsystem": "iobuf", 00:04:55.259 "config": [ 00:04:55.259 { 00:04:55.259 "method": "iobuf_set_options", 00:04:55.259 "params": { 00:04:55.259 "small_pool_count": 8192, 00:04:55.259 "large_pool_count": 1024, 00:04:55.259 "small_bufsize": 8192, 00:04:55.259 "large_bufsize": 135168, 00:04:55.259 "enable_numa": false 00:04:55.259 } 
00:04:55.259 } 00:04:55.259 ] 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "subsystem": "sock", 00:04:55.259 "config": [ 00:04:55.259 { 00:04:55.259 "method": "sock_set_default_impl", 00:04:55.259 "params": { 00:04:55.259 "impl_name": "uring" 00:04:55.259 } 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "method": "sock_impl_set_options", 00:04:55.259 "params": { 00:04:55.259 "impl_name": "ssl", 00:04:55.259 "recv_buf_size": 4096, 00:04:55.259 "send_buf_size": 4096, 00:04:55.259 "enable_recv_pipe": true, 00:04:55.259 "enable_quickack": false, 00:04:55.259 "enable_placement_id": 0, 00:04:55.259 "enable_zerocopy_send_server": true, 00:04:55.259 "enable_zerocopy_send_client": false, 00:04:55.259 "zerocopy_threshold": 0, 00:04:55.259 "tls_version": 0, 00:04:55.259 "enable_ktls": false 00:04:55.259 } 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "method": "sock_impl_set_options", 00:04:55.259 "params": { 00:04:55.259 "impl_name": "posix", 00:04:55.259 "recv_buf_size": 2097152, 00:04:55.259 "send_buf_size": 2097152, 00:04:55.259 "enable_recv_pipe": true, 00:04:55.259 "enable_quickack": false, 00:04:55.259 "enable_placement_id": 0, 00:04:55.259 "enable_zerocopy_send_server": true, 00:04:55.259 "enable_zerocopy_send_client": false, 00:04:55.259 "zerocopy_threshold": 0, 00:04:55.259 "tls_version": 0, 00:04:55.259 "enable_ktls": false 00:04:55.259 } 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "method": "sock_impl_set_options", 00:04:55.259 "params": { 00:04:55.259 "impl_name": "uring", 00:04:55.259 "recv_buf_size": 2097152, 00:04:55.259 "send_buf_size": 2097152, 00:04:55.259 "enable_recv_pipe": true, 00:04:55.259 "enable_quickack": false, 00:04:55.259 "enable_placement_id": 0, 00:04:55.259 "enable_zerocopy_send_server": false, 00:04:55.259 "enable_zerocopy_send_client": false, 00:04:55.259 "zerocopy_threshold": 0, 00:04:55.259 "tls_version": 0, 00:04:55.259 "enable_ktls": false 00:04:55.259 } 00:04:55.259 } 00:04:55.259 ] 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "subsystem": "vmd", 00:04:55.259 "config": [] 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "subsystem": "accel", 00:04:55.259 "config": [ 00:04:55.259 { 00:04:55.259 "method": "accel_set_options", 00:04:55.259 "params": { 00:04:55.259 "small_cache_size": 128, 00:04:55.259 "large_cache_size": 16, 00:04:55.259 "task_count": 2048, 00:04:55.259 "sequence_count": 2048, 00:04:55.259 "buf_count": 2048 00:04:55.259 } 00:04:55.259 } 00:04:55.259 ] 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "subsystem": "bdev", 00:04:55.259 "config": [ 00:04:55.259 { 00:04:55.259 "method": "bdev_set_options", 00:04:55.259 "params": { 00:04:55.259 "bdev_io_pool_size": 65535, 00:04:55.259 "bdev_io_cache_size": 256, 00:04:55.259 "bdev_auto_examine": true, 00:04:55.259 "iobuf_small_cache_size": 128, 00:04:55.259 "iobuf_large_cache_size": 16 00:04:55.259 } 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "method": "bdev_raid_set_options", 00:04:55.259 "params": { 00:04:55.259 "process_window_size_kb": 1024, 00:04:55.259 "process_max_bandwidth_mb_sec": 0 00:04:55.259 } 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "method": "bdev_iscsi_set_options", 00:04:55.259 "params": { 00:04:55.259 "timeout_sec": 30 00:04:55.259 } 00:04:55.259 }, 00:04:55.259 { 00:04:55.259 "method": "bdev_nvme_set_options", 00:04:55.259 "params": { 00:04:55.259 "action_on_timeout": "none", 00:04:55.259 "timeout_us": 0, 00:04:55.259 "timeout_admin_us": 0, 00:04:55.259 "keep_alive_timeout_ms": 10000, 00:04:55.259 "arbitration_burst": 0, 00:04:55.259 "low_priority_weight": 0, 00:04:55.259 "medium_priority_weight": 
0, 00:04:55.259 "high_priority_weight": 0, 00:04:55.259 "nvme_adminq_poll_period_us": 10000, 00:04:55.259 "nvme_ioq_poll_period_us": 0, 00:04:55.259 "io_queue_requests": 0, 00:04:55.259 "delay_cmd_submit": true, 00:04:55.259 "transport_retry_count": 4, 00:04:55.259 "bdev_retry_count": 3, 00:04:55.259 "transport_ack_timeout": 0, 00:04:55.259 "ctrlr_loss_timeout_sec": 0, 00:04:55.260 "reconnect_delay_sec": 0, 00:04:55.260 "fast_io_fail_timeout_sec": 0, 00:04:55.260 "disable_auto_failback": false, 00:04:55.260 "generate_uuids": false, 00:04:55.260 "transport_tos": 0, 00:04:55.260 "nvme_error_stat": false, 00:04:55.260 "rdma_srq_size": 0, 00:04:55.260 "io_path_stat": false, 00:04:55.260 "allow_accel_sequence": false, 00:04:55.260 "rdma_max_cq_size": 0, 00:04:55.260 "rdma_cm_event_timeout_ms": 0, 00:04:55.260 "dhchap_digests": [ 00:04:55.260 "sha256", 00:04:55.260 "sha384", 00:04:55.260 "sha512" 00:04:55.260 ], 00:04:55.260 "dhchap_dhgroups": [ 00:04:55.260 "null", 00:04:55.260 "ffdhe2048", 00:04:55.260 "ffdhe3072", 00:04:55.260 "ffdhe4096", 00:04:55.260 "ffdhe6144", 00:04:55.260 "ffdhe8192" 00:04:55.260 ] 00:04:55.260 } 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "method": "bdev_nvme_set_hotplug", 00:04:55.260 "params": { 00:04:55.260 "period_us": 100000, 00:04:55.260 "enable": false 00:04:55.260 } 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "method": "bdev_wait_for_examine" 00:04:55.260 } 00:04:55.260 ] 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "scsi", 00:04:55.260 "config": null 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "scheduler", 00:04:55.260 "config": [ 00:04:55.260 { 00:04:55.260 "method": "framework_set_scheduler", 00:04:55.260 "params": { 00:04:55.260 "name": "static" 00:04:55.260 } 00:04:55.260 } 00:04:55.260 ] 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "vhost_scsi", 00:04:55.260 "config": [] 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "vhost_blk", 00:04:55.260 "config": [] 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "ublk", 00:04:55.260 "config": [] 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "nbd", 00:04:55.260 "config": [] 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "nvmf", 00:04:55.260 "config": [ 00:04:55.260 { 00:04:55.260 "method": "nvmf_set_config", 00:04:55.260 "params": { 00:04:55.260 "discovery_filter": "match_any", 00:04:55.260 "admin_cmd_passthru": { 00:04:55.260 "identify_ctrlr": false 00:04:55.260 }, 00:04:55.260 "dhchap_digests": [ 00:04:55.260 "sha256", 00:04:55.260 "sha384", 00:04:55.260 "sha512" 00:04:55.260 ], 00:04:55.260 "dhchap_dhgroups": [ 00:04:55.260 "null", 00:04:55.260 "ffdhe2048", 00:04:55.260 "ffdhe3072", 00:04:55.260 "ffdhe4096", 00:04:55.260 "ffdhe6144", 00:04:55.260 "ffdhe8192" 00:04:55.260 ] 00:04:55.260 } 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "method": "nvmf_set_max_subsystems", 00:04:55.260 "params": { 00:04:55.260 "max_subsystems": 1024 00:04:55.260 } 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "method": "nvmf_set_crdt", 00:04:55.260 "params": { 00:04:55.260 "crdt1": 0, 00:04:55.260 "crdt2": 0, 00:04:55.260 "crdt3": 0 00:04:55.260 } 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "method": "nvmf_create_transport", 00:04:55.260 "params": { 00:04:55.260 "trtype": "TCP", 00:04:55.260 "max_queue_depth": 128, 00:04:55.260 "max_io_qpairs_per_ctrlr": 127, 00:04:55.260 "in_capsule_data_size": 4096, 00:04:55.260 "max_io_size": 131072, 00:04:55.260 "io_unit_size": 131072, 00:04:55.260 "max_aq_depth": 128, 00:04:55.260 "num_shared_buffers": 511, 00:04:55.260 
"buf_cache_size": 4294967295, 00:04:55.260 "dif_insert_or_strip": false, 00:04:55.260 "zcopy": false, 00:04:55.260 "c2h_success": true, 00:04:55.260 "sock_priority": 0, 00:04:55.260 "abort_timeout_sec": 1, 00:04:55.260 "ack_timeout": 0, 00:04:55.260 "data_wr_pool_size": 0 00:04:55.260 } 00:04:55.260 } 00:04:55.260 ] 00:04:55.260 }, 00:04:55.260 { 00:04:55.260 "subsystem": "iscsi", 00:04:55.260 "config": [ 00:04:55.260 { 00:04:55.260 "method": "iscsi_set_options", 00:04:55.260 "params": { 00:04:55.260 "node_base": "iqn.2016-06.io.spdk", 00:04:55.260 "max_sessions": 128, 00:04:55.260 "max_connections_per_session": 2, 00:04:55.260 "max_queue_depth": 64, 00:04:55.260 "default_time2wait": 2, 00:04:55.260 "default_time2retain": 20, 00:04:55.260 "first_burst_length": 8192, 00:04:55.260 "immediate_data": true, 00:04:55.260 "allow_duplicated_isid": false, 00:04:55.260 "error_recovery_level": 0, 00:04:55.260 "nop_timeout": 60, 00:04:55.260 "nop_in_interval": 30, 00:04:55.260 "disable_chap": false, 00:04:55.260 "require_chap": false, 00:04:55.260 "mutual_chap": false, 00:04:55.260 "chap_group": 0, 00:04:55.260 "max_large_datain_per_connection": 64, 00:04:55.260 "max_r2t_per_connection": 4, 00:04:55.260 "pdu_pool_size": 36864, 00:04:55.260 "immediate_data_pool_size": 16384, 00:04:55.260 "data_out_pool_size": 2048 00:04:55.260 } 00:04:55.260 } 00:04:55.260 ] 00:04:55.260 } 00:04:55.260 ] 00:04:55.260 } 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56923 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56923 ']' 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56923 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56923 00:04:55.260 killing process with pid 56923 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56923' 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56923 00:04:55.260 19:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56923 00:04:55.830 19:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56950 00:04:55.830 19:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.830 19:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56950 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56950 ']' 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56950 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:01.141 19:12:59 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56950 00:05:01.141 killing process with pid 56950 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56950' 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56950 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56950 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.141 ************************************ 00:05:01.141 END TEST skip_rpc_with_json 00:05:01.141 ************************************ 00:05:01.141 00:05:01.141 real 0m7.068s 00:05:01.141 user 0m6.790s 00:05:01.141 sys 0m0.648s 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.141 19:12:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:01.141 19:12:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.141 19:12:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.141 19:12:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.141 ************************************ 00:05:01.141 START TEST skip_rpc_with_delay 00:05:01.141 ************************************ 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.141 19:12:59 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.141 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.400 [2024-11-26 19:12:59.582226] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:01.400 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:01.400 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.400 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.400 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.400 00:05:01.400 real 0m0.091s 00:05:01.400 user 0m0.063s 00:05:01.400 sys 0m0.026s 00:05:01.400 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.400 ************************************ 00:05:01.400 END TEST skip_rpc_with_delay 00:05:01.400 ************************************ 00:05:01.400 19:12:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:01.400 19:12:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:01.400 19:12:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:01.400 19:12:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:01.400 19:12:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.400 19:12:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.400 19:12:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.400 ************************************ 00:05:01.400 START TEST exit_on_failed_rpc_init 00:05:01.400 ************************************ 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:01.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57060 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57060 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57060 ']' 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.400 19:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.400 [2024-11-26 19:12:59.730303] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
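(Editor's note on the skip_rpc_with_delay result logged just above: spdk_tgt rejects --wait-for-rpc when combined with --no-rpc-server, because with the RPC server disabled nothing could ever deliver the call that resumes initialization, so the app errors out and the test treats the non-zero exit as a pass. A minimal sketch of the two invocations, with paths abbreviated relative to the SPDK repo; framework_start_init is the usual resume RPC and is an assumption here, it is not part of this test run:

# Rejected by the app layer: no RPC server means the resume RPC can never arrive.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc

# Accepted: the target pauses at startup until an RPC reaches its default socket.
# (In a real script you would wait for /var/tmp/spdk.sock to appear before calling rpc.py.)
./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
./scripts/rpc.py framework_start_init
)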
00:05:01.400 [2024-11-26 19:12:59.730403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:05:01.659 [2024-11-26 19:12:59.877513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.659 [2024-11-26 19:12:59.931519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.659 [2024-11-26 19:13:00.002608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.919 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.919 [2024-11-26 19:13:00.283173] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:01.919 [2024-11-26 19:13:00.283274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57070 ] 00:05:02.178 [2024-11-26 19:13:00.433742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.178 [2024-11-26 19:13:00.493097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.178 [2024-11-26 19:13:00.493224] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
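(Editor's note: the listen failure just above, and the shutdown that follows, are exactly what exit_on_failed_rpc_init is designed to provoke; the second spdk_tgt instance on core mask 0x2 is started without overriding the RPC listen path, so it collides with the first instance already bound to /var/tmp/spdk.sock and exits non-zero. A minimal sketch of running two targets side by side instead, assuming a second socket path such as /var/tmp/spdk2.sock; the -r override is the same flag this run uses later for /var/tmp/spdk_tgt.sock:

# First target on the default socket, second on its own private socket.
# (In a real script you would wait for each socket to appear before issuing RPCs.)
./build/bin/spdk_tgt -m 0x1 &
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &

# Each instance is then addressed by passing the matching socket to rpc.py.
./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods
)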
00:05:02.178 [2024-11-26 19:13:00.493242] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:02.178 [2024-11-26 19:13:00.493253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57060 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57060 ']' 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57060 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57060 00:05:02.178 killing process with pid 57060 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57060' 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57060 00:05:02.178 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57060 00:05:02.747 00:05:02.747 real 0m1.311s 00:05:02.747 user 0m1.386s 00:05:02.747 sys 0m0.400s 00:05:02.747 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.747 19:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.747 ************************************ 00:05:02.747 END TEST exit_on_failed_rpc_init 00:05:02.747 ************************************ 00:05:02.747 19:13:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.747 00:05:02.747 real 0m14.359s 00:05:02.747 user 0m13.519s 00:05:02.747 sys 0m1.582s 00:05:02.747 19:13:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.747 ************************************ 00:05:02.747 END TEST skip_rpc 00:05:02.747 19:13:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.747 ************************************ 00:05:02.747 19:13:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.747 19:13:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.747 19:13:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.747 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.747 
************************************ 00:05:02.747 START TEST rpc_client 00:05:02.747 ************************************ 00:05:02.747 19:13:01 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.747 * Looking for test storage... 00:05:02.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:02.747 19:13:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.747 19:13:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.747 19:13:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.006 19:13:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.006 --rc genhtml_branch_coverage=1 00:05:03.006 --rc genhtml_function_coverage=1 00:05:03.006 --rc genhtml_legend=1 00:05:03.006 --rc geninfo_all_blocks=1 00:05:03.006 --rc geninfo_unexecuted_blocks=1 00:05:03.006 00:05:03.006 ' 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.006 --rc genhtml_branch_coverage=1 00:05:03.006 --rc genhtml_function_coverage=1 00:05:03.006 --rc genhtml_legend=1 00:05:03.006 --rc geninfo_all_blocks=1 00:05:03.006 --rc geninfo_unexecuted_blocks=1 00:05:03.006 00:05:03.006 ' 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.006 --rc genhtml_branch_coverage=1 00:05:03.006 --rc genhtml_function_coverage=1 00:05:03.006 --rc genhtml_legend=1 00:05:03.006 --rc geninfo_all_blocks=1 00:05:03.006 --rc geninfo_unexecuted_blocks=1 00:05:03.006 00:05:03.006 ' 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.006 --rc genhtml_branch_coverage=1 00:05:03.006 --rc genhtml_function_coverage=1 00:05:03.006 --rc genhtml_legend=1 00:05:03.006 --rc geninfo_all_blocks=1 00:05:03.006 --rc geninfo_unexecuted_blocks=1 00:05:03.006 00:05:03.006 ' 00:05:03.006 19:13:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:03.006 OK 00:05:03.006 19:13:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.006 00:05:03.006 real 0m0.204s 00:05:03.006 user 0m0.134s 00:05:03.006 sys 0m0.081s 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.006 19:13:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.006 ************************************ 00:05:03.006 END TEST rpc_client 00:05:03.006 ************************************ 00:05:03.006 19:13:01 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.006 19:13:01 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.006 19:13:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.006 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.006 ************************************ 00:05:03.006 START TEST json_config 00:05:03.006 ************************************ 00:05:03.006 19:13:01 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.006 19:13:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.006 19:13:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.006 19:13:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.264 19:13:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.264 19:13:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.264 19:13:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.264 19:13:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.264 19:13:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.264 19:13:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.264 19:13:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.264 19:13:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.264 19:13:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:03.264 19:13:01 json_config -- scripts/common.sh@345 -- # : 1 00:05:03.264 19:13:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.264 19:13:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.264 19:13:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:03.264 19:13:01 json_config -- scripts/common.sh@353 -- # local d=1 00:05:03.264 19:13:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.264 19:13:01 json_config -- scripts/common.sh@355 -- # echo 1 00:05:03.264 19:13:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.264 19:13:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@353 -- # local d=2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.264 19:13:01 json_config -- scripts/common.sh@355 -- # echo 2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.264 19:13:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.264 19:13:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.264 19:13:01 json_config -- scripts/common.sh@368 -- # return 0 00:05:03.264 19:13:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.264 19:13:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 19:13:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 19:13:01 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 19:13:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 19:13:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.264 19:13:01 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.264 19:13:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.264 19:13:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.264 19:13:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.264 19:13:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.265 19:13:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.265 19:13:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.265 19:13:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.265 19:13:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.265 19:13:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:03.265 19:13:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@51 -- # : 0 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.265 19:13:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.265 19:13:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.265 INFO: JSON configuration test init 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.265 19:13:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:03.265 19:13:01 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.265 19:13:01 json_config -- json_config/common.sh@10 -- # shift 
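(Editor's note: the "[: : integer expression expected" complaint a few entries above comes from nvmf/common.sh line 33, where an empty value reaches a numeric test ('[' '' -eq 1 ']'); the run continues because the test simply evaluates false. A defensive sketch of the same kind of check; the variable name below is hypothetical, the real one is not visible in this log:

# Default the flag to 0 so an unset value never reaches the numeric comparison.
if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
    echo "optional NVMe-oF setup would run here"
fi
)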
00:05:03.265 19:13:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.265 19:13:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.265 19:13:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.265 19:13:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.265 19:13:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.265 19:13:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57210 00:05:03.265 Waiting for target to run... 00:05:03.265 19:13:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.265 19:13:01 json_config -- json_config/common.sh@25 -- # waitforlisten 57210 /var/tmp/spdk_tgt.sock 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 57210 ']' 00:05:03.265 19:13:01 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.265 19:13:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.265 [2024-11-26 19:13:01.589985] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:03.265 [2024-11-26 19:13:01.590100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57210 ] 00:05:03.831 [2024-11-26 19:13:02.016128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.831 [2024-11-26 19:13:02.054132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.397 19:13:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.397 19:13:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:04.397 00:05:04.397 19:13:02 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.397 19:13:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:04.397 19:13:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:04.397 19:13:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.397 19:13:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.397 19:13:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:04.397 19:13:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:04.397 19:13:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.397 19:13:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.397 19:13:02 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:04.397 19:13:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:04.397 19:13:02 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.655 [2024-11-26 19:13:02.920433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.914 19:13:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.914 19:13:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:04.914 19:13:03 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:04.914 19:13:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@54 -- # sort 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:05.174 19:13:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.174 19:13:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:05.174 19:13:03 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:05.174 19:13:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.175 19:13:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.175 19:13:03 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:05.175 19:13:03 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:05.175 19:13:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:05.175 19:13:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.175 19:13:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.435 MallocForNvmf0 00:05:05.435 19:13:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.435 19:13:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.693 MallocForNvmf1 00:05:05.693 19:13:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.693 19:13:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.953 [2024-11-26 19:13:04.267191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.953 19:13:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.953 19:13:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.212 19:13:04 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.212 19:13:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.470 19:13:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.470 19:13:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.729 19:13:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.729 19:13:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.987 [2024-11-26 19:13:05.287702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.987 19:13:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:06.987 19:13:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.987 19:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.987 19:13:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:06.987 19:13:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.987 19:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.987 19:13:05 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:06.987 19:13:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.987 19:13:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.245 MallocBdevForConfigChangeCheck 00:05:07.504 19:13:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:07.504 19:13:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.504 19:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.504 19:13:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:07.504 19:13:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.762 INFO: shutting down applications... 00:05:07.762 19:13:06 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:07.762 19:13:06 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:07.763 19:13:06 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:07.763 19:13:06 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:07.763 19:13:06 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:08.022 Calling clear_iscsi_subsystem 00:05:08.022 Calling clear_nvmf_subsystem 00:05:08.022 Calling clear_nbd_subsystem 00:05:08.022 Calling clear_ublk_subsystem 00:05:08.022 Calling clear_vhost_blk_subsystem 00:05:08.022 Calling clear_vhost_scsi_subsystem 00:05:08.022 Calling clear_bdev_subsystem 00:05:08.022 19:13:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:08.022 19:13:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:08.022 19:13:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:08.022 19:13:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.022 19:13:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.022 19:13:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:08.590 19:13:06 json_config -- json_config/json_config.sh@352 -- # break 00:05:08.590 19:13:06 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:08.590 19:13:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:08.590 19:13:06 json_config -- json_config/common.sh@31 -- # local app=target 00:05:08.590 19:13:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.590 19:13:06 json_config -- json_config/common.sh@35 -- # [[ -n 57210 ]] 00:05:08.590 19:13:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57210 00:05:08.590 19:13:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.590 19:13:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.590 19:13:06 json_config -- json_config/common.sh@41 -- # kill -0 57210 00:05:08.590 19:13:06 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:09.158 19:13:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.158 19:13:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.158 19:13:07 json_config -- json_config/common.sh@41 -- # kill -0 57210 00:05:09.158 19:13:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.158 19:13:07 json_config -- json_config/common.sh@43 -- # break 00:05:09.158 19:13:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.158 SPDK target shutdown done 00:05:09.158 19:13:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.158 INFO: relaunching applications... 00:05:09.158 19:13:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:09.158 19:13:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.158 19:13:07 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.158 19:13:07 json_config -- json_config/common.sh@10 -- # shift 00:05:09.158 19:13:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.158 19:13:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.158 19:13:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.158 19:13:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.158 19:13:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.158 19:13:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57405 00:05:09.158 Waiting for target to run... 00:05:09.158 19:13:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.158 19:13:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.158 19:13:07 json_config -- json_config/common.sh@25 -- # waitforlisten 57405 /var/tmp/spdk_tgt.sock 00:05:09.158 19:13:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 57405 ']' 00:05:09.158 19:13:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.158 19:13:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.158 19:13:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.158 19:13:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.158 19:13:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.158 [2024-11-26 19:13:07.451445] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
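(Editor's note: this relaunch is the core of the json_config round trip; the configuration captured from the first target is handed straight back through --json, so the second target should come up in an identical state. A minimal sketch of the same round trip, assuming a target already running with its RPC socket at /var/tmp/spdk_tgt.sock:

# Capture the live configuration of the running target to a file.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json

# Restart the target non-interactively from that snapshot.
./build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json
)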
00:05:09.158 [2024-11-26 19:13:07.451547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57405 ] 00:05:09.725 [2024-11-26 19:13:07.898965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.725 [2024-11-26 19:13:07.933518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.725 [2024-11-26 19:13:08.072973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.984 [2024-11-26 19:13:08.293038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.984 [2024-11-26 19:13:08.325105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.984 19:13:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.984 00:05:09.984 19:13:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:09.984 19:13:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.984 19:13:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:09.984 INFO: Checking if target configuration is the same... 00:05:09.984 19:13:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:09.984 19:13:08 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.984 19:13:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:09.984 19:13:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.984 + '[' 2 -ne 2 ']' 00:05:09.984 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:09.984 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:09.984 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:09.984 +++ basename /dev/fd/62 00:05:09.984 ++ mktemp /tmp/62.XXX 00:05:09.984 + tmp_file_1=/tmp/62.SaQ 00:05:09.984 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.984 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.984 + tmp_file_2=/tmp/spdk_tgt_config.json.DBx 00:05:09.984 + ret=0 00:05:09.984 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.552 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.552 + diff -u /tmp/62.SaQ /tmp/spdk_tgt_config.json.DBx 00:05:10.552 INFO: JSON config files are the same 00:05:10.552 + echo 'INFO: JSON config files are the same' 00:05:10.552 + rm /tmp/62.SaQ /tmp/spdk_tgt_config.json.DBx 00:05:10.552 + exit 0 00:05:10.552 19:13:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:10.552 INFO: changing configuration and checking if this can be detected... 00:05:10.552 19:13:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:10.552 19:13:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.552 19:13:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.811 19:13:09 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.811 19:13:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:10.811 19:13:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.811 + '[' 2 -ne 2 ']' 00:05:10.811 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:10.811 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:10.811 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:10.811 +++ basename /dev/fd/62 00:05:10.811 ++ mktemp /tmp/62.XXX 00:05:10.811 + tmp_file_1=/tmp/62.xzV 00:05:10.811 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.811 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.811 + tmp_file_2=/tmp/spdk_tgt_config.json.0RI 00:05:10.811 + ret=0 00:05:10.811 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.378 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.378 + diff -u /tmp/62.xzV /tmp/spdk_tgt_config.json.0RI 00:05:11.378 + ret=1 00:05:11.378 + echo '=== Start of file: /tmp/62.xzV ===' 00:05:11.378 + cat /tmp/62.xzV 00:05:11.378 + echo '=== End of file: /tmp/62.xzV ===' 00:05:11.378 + echo '' 00:05:11.378 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0RI ===' 00:05:11.378 + cat /tmp/spdk_tgt_config.json.0RI 00:05:11.378 + echo '=== End of file: /tmp/spdk_tgt_config.json.0RI ===' 00:05:11.378 + echo '' 00:05:11.378 + rm /tmp/62.xzV /tmp/spdk_tgt_config.json.0RI 00:05:11.378 + exit 1 00:05:11.378 INFO: configuration change detected. 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 57405 ]] 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.378 19:13:09 json_config -- json_config/json_config.sh@330 -- # killprocess 57405 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 57405 ']' 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@958 -- # kill -0 57405 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@959 -- # uname 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57405 00:05:11.378 killing process with pid 57405 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57405' 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@973 -- # kill 57405 00:05:11.378 19:13:09 json_config -- common/autotest_common.sh@978 -- # wait 57405 00:05:11.637 19:13:09 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.637 19:13:09 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:11.637 19:13:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.637 19:13:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.637 INFO: Success 00:05:11.637 19:13:10 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:11.637 19:13:10 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:11.637 ************************************ 00:05:11.637 END TEST json_config 00:05:11.637 
************************************ 00:05:11.637 00:05:11.637 real 0m8.710s 00:05:11.637 user 0m12.493s 00:05:11.637 sys 0m1.756s 00:05:11.637 19:13:10 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.637 19:13:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.637 19:13:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.637 19:13:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.637 19:13:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.637 19:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.897 ************************************ 00:05:11.897 START TEST json_config_extra_key 00:05:11.897 ************************************ 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.897 --rc genhtml_branch_coverage=1 00:05:11.897 --rc genhtml_function_coverage=1 00:05:11.897 --rc genhtml_legend=1 00:05:11.897 --rc geninfo_all_blocks=1 00:05:11.897 --rc geninfo_unexecuted_blocks=1 00:05:11.897 00:05:11.897 ' 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.897 --rc genhtml_branch_coverage=1 00:05:11.897 --rc genhtml_function_coverage=1 00:05:11.897 --rc genhtml_legend=1 00:05:11.897 --rc geninfo_all_blocks=1 00:05:11.897 --rc geninfo_unexecuted_blocks=1 00:05:11.897 00:05:11.897 ' 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.897 --rc genhtml_branch_coverage=1 00:05:11.897 --rc genhtml_function_coverage=1 00:05:11.897 --rc genhtml_legend=1 00:05:11.897 --rc geninfo_all_blocks=1 00:05:11.897 --rc geninfo_unexecuted_blocks=1 00:05:11.897 00:05:11.897 ' 00:05:11.897 19:13:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.897 --rc genhtml_branch_coverage=1 00:05:11.897 --rc genhtml_function_coverage=1 00:05:11.897 --rc genhtml_legend=1 00:05:11.897 --rc geninfo_all_blocks=1 00:05:11.897 --rc geninfo_unexecuted_blocks=1 00:05:11.897 00:05:11.897 ' 00:05:11.897 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.897 19:13:10 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.897 19:13:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.897 19:13:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.898 19:13:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.898 19:13:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.898 19:13:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.898 19:13:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:11.898 19:13:10 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:11.898 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:11.898 19:13:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:11.898 INFO: launching applications... 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:11.898 19:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57558 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.898 Waiting for target to run... 00:05:11.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57558 /var/tmp/spdk_tgt.sock 00:05:11.898 19:13:10 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57558 ']' 00:05:11.898 19:13:10 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.898 19:13:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:11.898 19:13:10 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.898 19:13:10 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.898 19:13:10 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.898 19:13:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.158 [2024-11-26 19:13:10.341573] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:12.158 [2024-11-26 19:13:10.341676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57558 ] 00:05:12.416 [2024-11-26 19:13:10.793834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.416 [2024-11-26 19:13:10.835454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.674 [2024-11-26 19:13:10.866621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.243 00:05:13.243 INFO: shutting down applications... 00:05:13.243 19:13:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.243 19:13:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:13.243 19:13:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:13.243 19:13:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57558 ]] 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57558 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57558 00:05:13.243 19:13:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.578 19:13:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.578 19:13:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.578 19:13:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57558 00:05:13.578 19:13:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.578 19:13:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:13.578 19:13:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.578 19:13:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.578 SPDK target shutdown done 00:05:13.578 19:13:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:13.578 Success 00:05:13.578 00:05:13.578 real 0m1.816s 00:05:13.578 user 0m1.723s 00:05:13.578 sys 0m0.477s 00:05:13.578 19:13:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.578 19:13:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.578 ************************************ 00:05:13.578 END TEST json_config_extra_key 00:05:13.578 ************************************ 00:05:13.578 19:13:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.578 19:13:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.578 19:13:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.578 19:13:11 -- common/autotest_common.sh@10 -- # set +x 00:05:13.578 ************************************ 00:05:13.578 START TEST alias_rpc 00:05:13.578 ************************************ 00:05:13.578 19:13:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.837 * Looking for test storage... 
00:05:13.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:13.837 19:13:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.837 19:13:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.837 19:13:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.837 19:13:12 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.837 19:13:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:13.837 19:13:12 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.837 19:13:12 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.837 --rc genhtml_branch_coverage=1 00:05:13.837 --rc genhtml_function_coverage=1 00:05:13.837 --rc genhtml_legend=1 00:05:13.837 --rc geninfo_all_blocks=1 00:05:13.837 --rc geninfo_unexecuted_blocks=1 00:05:13.837 00:05:13.837 ' 00:05:13.837 19:13:12 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.837 --rc genhtml_branch_coverage=1 00:05:13.837 --rc genhtml_function_coverage=1 00:05:13.837 --rc genhtml_legend=1 00:05:13.838 --rc geninfo_all_blocks=1 00:05:13.838 --rc geninfo_unexecuted_blocks=1 00:05:13.838 00:05:13.838 ' 00:05:13.838 19:13:12 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.838 --rc genhtml_branch_coverage=1 00:05:13.838 --rc genhtml_function_coverage=1 00:05:13.838 --rc genhtml_legend=1 00:05:13.838 --rc geninfo_all_blocks=1 00:05:13.838 --rc geninfo_unexecuted_blocks=1 00:05:13.838 00:05:13.838 ' 00:05:13.838 19:13:12 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.838 --rc genhtml_branch_coverage=1 00:05:13.838 --rc genhtml_function_coverage=1 00:05:13.838 --rc genhtml_legend=1 00:05:13.838 --rc geninfo_all_blocks=1 00:05:13.838 --rc geninfo_unexecuted_blocks=1 00:05:13.838 00:05:13.838 ' 00:05:13.838 19:13:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.838 19:13:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57632 00:05:13.838 19:13:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57632 00:05:13.838 19:13:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.838 19:13:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57632 ']' 00:05:13.838 19:13:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.838 19:13:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.838 19:13:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.838 19:13:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.838 19:13:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.838 [2024-11-26 19:13:12.199513] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:13.838 [2024-11-26 19:13:12.199828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57632 ] 00:05:14.097 [2024-11-26 19:13:12.348029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.097 [2024-11-26 19:13:12.396564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.097 [2024-11-26 19:13:12.463911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.356 19:13:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.356 19:13:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.356 19:13:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:14.615 19:13:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57632 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57632 ']' 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57632 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57632 00:05:14.615 killing process with pid 57632 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57632' 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 57632 00:05:14.615 19:13:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 57632 00:05:15.183 ************************************ 00:05:15.183 END TEST alias_rpc 00:05:15.183 ************************************ 00:05:15.183 00:05:15.183 real 0m1.376s 00:05:15.183 user 0m1.414s 00:05:15.183 sys 0m0.412s 00:05:15.183 19:13:13 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.183 19:13:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.183 19:13:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:15.183 19:13:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.183 19:13:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.183 19:13:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.183 19:13:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.183 ************************************ 00:05:15.183 START TEST spdkcli_tcp 00:05:15.183 ************************************ 00:05:15.183 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.183 * Looking for test storage... 
00:05:15.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:15.183 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.183 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.183 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.183 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.184 19:13:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.184 --rc genhtml_branch_coverage=1 00:05:15.184 --rc genhtml_function_coverage=1 00:05:15.184 --rc genhtml_legend=1 00:05:15.184 --rc geninfo_all_blocks=1 00:05:15.184 --rc geninfo_unexecuted_blocks=1 00:05:15.184 00:05:15.184 ' 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.184 --rc genhtml_branch_coverage=1 00:05:15.184 --rc genhtml_function_coverage=1 00:05:15.184 --rc genhtml_legend=1 00:05:15.184 --rc geninfo_all_blocks=1 00:05:15.184 --rc geninfo_unexecuted_blocks=1 00:05:15.184 
00:05:15.184 ' 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.184 --rc genhtml_branch_coverage=1 00:05:15.184 --rc genhtml_function_coverage=1 00:05:15.184 --rc genhtml_legend=1 00:05:15.184 --rc geninfo_all_blocks=1 00:05:15.184 --rc geninfo_unexecuted_blocks=1 00:05:15.184 00:05:15.184 ' 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.184 --rc genhtml_branch_coverage=1 00:05:15.184 --rc genhtml_function_coverage=1 00:05:15.184 --rc genhtml_legend=1 00:05:15.184 --rc geninfo_all_blocks=1 00:05:15.184 --rc geninfo_unexecuted_blocks=1 00:05:15.184 00:05:15.184 ' 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57708 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57708 00:05:15.184 19:13:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57708 ']' 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.184 19:13:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.442 [2024-11-26 19:13:13.639009] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:15.442 [2024-11-26 19:13:13.639304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57708 ] 00:05:15.442 [2024-11-26 19:13:13.786113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.442 [2024-11-26 19:13:13.830700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.442 [2024-11-26 19:13:13.830708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.700 [2024-11-26 19:13:13.899621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.700 19:13:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.700 19:13:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:15.700 19:13:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57718 00:05:15.700 19:13:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:15.700 19:13:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:15.958 [ 00:05:15.958 "bdev_malloc_delete", 00:05:15.958 "bdev_malloc_create", 00:05:15.958 "bdev_null_resize", 00:05:15.958 "bdev_null_delete", 00:05:15.958 "bdev_null_create", 00:05:15.958 "bdev_nvme_cuse_unregister", 00:05:15.958 "bdev_nvme_cuse_register", 00:05:15.958 "bdev_opal_new_user", 00:05:15.958 "bdev_opal_set_lock_state", 00:05:15.958 "bdev_opal_delete", 00:05:15.958 "bdev_opal_get_info", 00:05:15.958 "bdev_opal_create", 00:05:15.958 "bdev_nvme_opal_revert", 00:05:15.958 "bdev_nvme_opal_init", 00:05:15.958 "bdev_nvme_send_cmd", 00:05:15.958 "bdev_nvme_set_keys", 00:05:15.958 "bdev_nvme_get_path_iostat", 00:05:15.959 "bdev_nvme_get_mdns_discovery_info", 00:05:15.959 "bdev_nvme_stop_mdns_discovery", 00:05:15.959 "bdev_nvme_start_mdns_discovery", 00:05:15.959 "bdev_nvme_set_multipath_policy", 00:05:15.959 "bdev_nvme_set_preferred_path", 00:05:15.959 "bdev_nvme_get_io_paths", 00:05:15.959 "bdev_nvme_remove_error_injection", 00:05:15.959 "bdev_nvme_add_error_injection", 00:05:15.959 "bdev_nvme_get_discovery_info", 00:05:15.959 "bdev_nvme_stop_discovery", 00:05:15.959 "bdev_nvme_start_discovery", 00:05:15.959 "bdev_nvme_get_controller_health_info", 00:05:15.959 "bdev_nvme_disable_controller", 00:05:15.959 "bdev_nvme_enable_controller", 00:05:15.959 "bdev_nvme_reset_controller", 00:05:15.959 "bdev_nvme_get_transport_statistics", 00:05:15.959 "bdev_nvme_apply_firmware", 00:05:15.959 "bdev_nvme_detach_controller", 00:05:15.959 "bdev_nvme_get_controllers", 00:05:15.959 "bdev_nvme_attach_controller", 00:05:15.959 "bdev_nvme_set_hotplug", 00:05:15.959 "bdev_nvme_set_options", 00:05:15.959 "bdev_passthru_delete", 00:05:15.959 "bdev_passthru_create", 00:05:15.959 "bdev_lvol_set_parent_bdev", 00:05:15.959 "bdev_lvol_set_parent", 00:05:15.959 "bdev_lvol_check_shallow_copy", 00:05:15.959 "bdev_lvol_start_shallow_copy", 00:05:15.959 "bdev_lvol_grow_lvstore", 00:05:15.959 "bdev_lvol_get_lvols", 00:05:15.959 "bdev_lvol_get_lvstores", 00:05:15.959 "bdev_lvol_delete", 00:05:15.959 "bdev_lvol_set_read_only", 00:05:15.959 "bdev_lvol_resize", 00:05:15.959 "bdev_lvol_decouple_parent", 00:05:15.959 "bdev_lvol_inflate", 00:05:15.959 "bdev_lvol_rename", 00:05:15.959 "bdev_lvol_clone_bdev", 00:05:15.959 "bdev_lvol_clone", 00:05:15.959 "bdev_lvol_snapshot", 
00:05:15.959 "bdev_lvol_create", 00:05:15.959 "bdev_lvol_delete_lvstore", 00:05:15.959 "bdev_lvol_rename_lvstore", 00:05:15.959 "bdev_lvol_create_lvstore", 00:05:15.959 "bdev_raid_set_options", 00:05:15.959 "bdev_raid_remove_base_bdev", 00:05:15.959 "bdev_raid_add_base_bdev", 00:05:15.959 "bdev_raid_delete", 00:05:15.959 "bdev_raid_create", 00:05:15.959 "bdev_raid_get_bdevs", 00:05:15.959 "bdev_error_inject_error", 00:05:15.959 "bdev_error_delete", 00:05:15.959 "bdev_error_create", 00:05:15.959 "bdev_split_delete", 00:05:15.959 "bdev_split_create", 00:05:15.959 "bdev_delay_delete", 00:05:15.959 "bdev_delay_create", 00:05:15.959 "bdev_delay_update_latency", 00:05:15.959 "bdev_zone_block_delete", 00:05:15.959 "bdev_zone_block_create", 00:05:15.959 "blobfs_create", 00:05:15.959 "blobfs_detect", 00:05:15.959 "blobfs_set_cache_size", 00:05:15.959 "bdev_aio_delete", 00:05:15.959 "bdev_aio_rescan", 00:05:15.959 "bdev_aio_create", 00:05:15.959 "bdev_ftl_set_property", 00:05:15.959 "bdev_ftl_get_properties", 00:05:15.959 "bdev_ftl_get_stats", 00:05:15.959 "bdev_ftl_unmap", 00:05:15.959 "bdev_ftl_unload", 00:05:15.959 "bdev_ftl_delete", 00:05:15.959 "bdev_ftl_load", 00:05:15.959 "bdev_ftl_create", 00:05:15.959 "bdev_virtio_attach_controller", 00:05:15.959 "bdev_virtio_scsi_get_devices", 00:05:15.959 "bdev_virtio_detach_controller", 00:05:15.959 "bdev_virtio_blk_set_hotplug", 00:05:15.959 "bdev_iscsi_delete", 00:05:15.959 "bdev_iscsi_create", 00:05:15.959 "bdev_iscsi_set_options", 00:05:15.959 "bdev_uring_delete", 00:05:15.959 "bdev_uring_rescan", 00:05:15.959 "bdev_uring_create", 00:05:15.959 "accel_error_inject_error", 00:05:15.959 "ioat_scan_accel_module", 00:05:15.959 "dsa_scan_accel_module", 00:05:15.959 "iaa_scan_accel_module", 00:05:15.959 "keyring_file_remove_key", 00:05:15.959 "keyring_file_add_key", 00:05:15.959 "keyring_linux_set_options", 00:05:15.959 "fsdev_aio_delete", 00:05:15.959 "fsdev_aio_create", 00:05:15.959 "iscsi_get_histogram", 00:05:15.959 "iscsi_enable_histogram", 00:05:15.959 "iscsi_set_options", 00:05:15.959 "iscsi_get_auth_groups", 00:05:15.959 "iscsi_auth_group_remove_secret", 00:05:15.959 "iscsi_auth_group_add_secret", 00:05:15.959 "iscsi_delete_auth_group", 00:05:15.959 "iscsi_create_auth_group", 00:05:15.959 "iscsi_set_discovery_auth", 00:05:15.959 "iscsi_get_options", 00:05:15.959 "iscsi_target_node_request_logout", 00:05:15.959 "iscsi_target_node_set_redirect", 00:05:15.959 "iscsi_target_node_set_auth", 00:05:15.959 "iscsi_target_node_add_lun", 00:05:15.959 "iscsi_get_stats", 00:05:15.959 "iscsi_get_connections", 00:05:15.959 "iscsi_portal_group_set_auth", 00:05:15.959 "iscsi_start_portal_group", 00:05:15.959 "iscsi_delete_portal_group", 00:05:15.959 "iscsi_create_portal_group", 00:05:15.959 "iscsi_get_portal_groups", 00:05:15.959 "iscsi_delete_target_node", 00:05:15.959 "iscsi_target_node_remove_pg_ig_maps", 00:05:15.959 "iscsi_target_node_add_pg_ig_maps", 00:05:15.959 "iscsi_create_target_node", 00:05:15.959 "iscsi_get_target_nodes", 00:05:15.959 "iscsi_delete_initiator_group", 00:05:15.959 "iscsi_initiator_group_remove_initiators", 00:05:15.959 "iscsi_initiator_group_add_initiators", 00:05:15.959 "iscsi_create_initiator_group", 00:05:15.959 "iscsi_get_initiator_groups", 00:05:15.959 "nvmf_set_crdt", 00:05:15.959 "nvmf_set_config", 00:05:15.959 "nvmf_set_max_subsystems", 00:05:15.959 "nvmf_stop_mdns_prr", 00:05:15.959 "nvmf_publish_mdns_prr", 00:05:15.959 "nvmf_subsystem_get_listeners", 00:05:15.959 "nvmf_subsystem_get_qpairs", 00:05:15.959 
"nvmf_subsystem_get_controllers", 00:05:15.959 "nvmf_get_stats", 00:05:15.959 "nvmf_get_transports", 00:05:15.959 "nvmf_create_transport", 00:05:15.959 "nvmf_get_targets", 00:05:15.959 "nvmf_delete_target", 00:05:15.959 "nvmf_create_target", 00:05:15.959 "nvmf_subsystem_allow_any_host", 00:05:15.959 "nvmf_subsystem_set_keys", 00:05:15.959 "nvmf_subsystem_remove_host", 00:05:15.959 "nvmf_subsystem_add_host", 00:05:15.959 "nvmf_ns_remove_host", 00:05:15.959 "nvmf_ns_add_host", 00:05:15.959 "nvmf_subsystem_remove_ns", 00:05:15.959 "nvmf_subsystem_set_ns_ana_group", 00:05:15.959 "nvmf_subsystem_add_ns", 00:05:15.959 "nvmf_subsystem_listener_set_ana_state", 00:05:15.959 "nvmf_discovery_get_referrals", 00:05:15.959 "nvmf_discovery_remove_referral", 00:05:15.959 "nvmf_discovery_add_referral", 00:05:15.959 "nvmf_subsystem_remove_listener", 00:05:15.959 "nvmf_subsystem_add_listener", 00:05:15.959 "nvmf_delete_subsystem", 00:05:15.959 "nvmf_create_subsystem", 00:05:15.959 "nvmf_get_subsystems", 00:05:15.959 "env_dpdk_get_mem_stats", 00:05:15.959 "nbd_get_disks", 00:05:15.959 "nbd_stop_disk", 00:05:15.959 "nbd_start_disk", 00:05:15.959 "ublk_recover_disk", 00:05:15.959 "ublk_get_disks", 00:05:15.959 "ublk_stop_disk", 00:05:15.959 "ublk_start_disk", 00:05:15.959 "ublk_destroy_target", 00:05:15.959 "ublk_create_target", 00:05:15.959 "virtio_blk_create_transport", 00:05:15.959 "virtio_blk_get_transports", 00:05:15.959 "vhost_controller_set_coalescing", 00:05:15.959 "vhost_get_controllers", 00:05:15.959 "vhost_delete_controller", 00:05:15.959 "vhost_create_blk_controller", 00:05:15.959 "vhost_scsi_controller_remove_target", 00:05:15.959 "vhost_scsi_controller_add_target", 00:05:15.959 "vhost_start_scsi_controller", 00:05:15.959 "vhost_create_scsi_controller", 00:05:15.959 "thread_set_cpumask", 00:05:15.959 "scheduler_set_options", 00:05:15.959 "framework_get_governor", 00:05:15.959 "framework_get_scheduler", 00:05:15.959 "framework_set_scheduler", 00:05:15.959 "framework_get_reactors", 00:05:15.959 "thread_get_io_channels", 00:05:15.959 "thread_get_pollers", 00:05:15.959 "thread_get_stats", 00:05:15.959 "framework_monitor_context_switch", 00:05:15.960 "spdk_kill_instance", 00:05:15.960 "log_enable_timestamps", 00:05:15.960 "log_get_flags", 00:05:15.960 "log_clear_flag", 00:05:15.960 "log_set_flag", 00:05:15.960 "log_get_level", 00:05:15.960 "log_set_level", 00:05:15.960 "log_get_print_level", 00:05:15.960 "log_set_print_level", 00:05:15.960 "framework_enable_cpumask_locks", 00:05:15.960 "framework_disable_cpumask_locks", 00:05:15.960 "framework_wait_init", 00:05:15.960 "framework_start_init", 00:05:15.960 "scsi_get_devices", 00:05:15.960 "bdev_get_histogram", 00:05:15.960 "bdev_enable_histogram", 00:05:15.960 "bdev_set_qos_limit", 00:05:15.960 "bdev_set_qd_sampling_period", 00:05:15.960 "bdev_get_bdevs", 00:05:15.960 "bdev_reset_iostat", 00:05:15.960 "bdev_get_iostat", 00:05:15.960 "bdev_examine", 00:05:15.960 "bdev_wait_for_examine", 00:05:15.960 "bdev_set_options", 00:05:15.960 "accel_get_stats", 00:05:15.960 "accel_set_options", 00:05:15.960 "accel_set_driver", 00:05:15.960 "accel_crypto_key_destroy", 00:05:15.960 "accel_crypto_keys_get", 00:05:15.960 "accel_crypto_key_create", 00:05:15.960 "accel_assign_opc", 00:05:15.960 "accel_get_module_info", 00:05:15.960 "accel_get_opc_assignments", 00:05:15.960 "vmd_rescan", 00:05:15.960 "vmd_remove_device", 00:05:15.960 "vmd_enable", 00:05:15.960 "sock_get_default_impl", 00:05:15.960 "sock_set_default_impl", 00:05:15.960 "sock_impl_set_options", 00:05:15.960 
"sock_impl_get_options", 00:05:15.960 "iobuf_get_stats", 00:05:15.960 "iobuf_set_options", 00:05:15.960 "keyring_get_keys", 00:05:15.960 "framework_get_pci_devices", 00:05:15.960 "framework_get_config", 00:05:15.960 "framework_get_subsystems", 00:05:15.960 "fsdev_set_opts", 00:05:15.960 "fsdev_get_opts", 00:05:15.960 "trace_get_info", 00:05:15.960 "trace_get_tpoint_group_mask", 00:05:15.960 "trace_disable_tpoint_group", 00:05:15.960 "trace_enable_tpoint_group", 00:05:15.960 "trace_clear_tpoint_mask", 00:05:15.960 "trace_set_tpoint_mask", 00:05:15.960 "notify_get_notifications", 00:05:15.960 "notify_get_types", 00:05:15.960 "spdk_get_version", 00:05:15.960 "rpc_get_methods" 00:05:15.960 ] 00:05:15.960 19:13:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:15.960 19:13:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.960 19:13:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.218 19:13:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.218 19:13:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57708 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57708 ']' 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57708 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57708 00:05:16.218 killing process with pid 57708 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57708' 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57708 00:05:16.218 19:13:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57708 00:05:16.477 ************************************ 00:05:16.477 END TEST spdkcli_tcp 00:05:16.477 ************************************ 00:05:16.477 00:05:16.477 real 0m1.454s 00:05:16.477 user 0m2.472s 00:05:16.477 sys 0m0.468s 00:05:16.477 19:13:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.477 19:13:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.477 19:13:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.477 19:13:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.477 19:13:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.477 19:13:14 -- common/autotest_common.sh@10 -- # set +x 00:05:16.477 ************************************ 00:05:16.477 START TEST dpdk_mem_utility 00:05:16.477 ************************************ 00:05:16.477 19:13:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.736 * Looking for test storage... 
00:05:16.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:16.736 19:13:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:16.736 19:13:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:16.736 19:13:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:16.736 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.736 19:13:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:16.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.737 19:13:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.737 --rc genhtml_branch_coverage=1 00:05:16.737 --rc genhtml_function_coverage=1 00:05:16.737 --rc genhtml_legend=1 00:05:16.737 --rc geninfo_all_blocks=1 00:05:16.737 --rc geninfo_unexecuted_blocks=1 00:05:16.737 00:05:16.737 ' 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.737 --rc genhtml_branch_coverage=1 00:05:16.737 --rc genhtml_function_coverage=1 00:05:16.737 --rc genhtml_legend=1 00:05:16.737 --rc geninfo_all_blocks=1 00:05:16.737 --rc geninfo_unexecuted_blocks=1 00:05:16.737 00:05:16.737 ' 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.737 --rc genhtml_branch_coverage=1 00:05:16.737 --rc genhtml_function_coverage=1 00:05:16.737 --rc genhtml_legend=1 00:05:16.737 --rc geninfo_all_blocks=1 00:05:16.737 --rc geninfo_unexecuted_blocks=1 00:05:16.737 00:05:16.737 ' 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.737 --rc genhtml_branch_coverage=1 00:05:16.737 --rc genhtml_function_coverage=1 00:05:16.737 --rc genhtml_legend=1 00:05:16.737 --rc geninfo_all_blocks=1 00:05:16.737 --rc geninfo_unexecuted_blocks=1 00:05:16.737 00:05:16.737 ' 00:05:16.737 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.737 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57800 00:05:16.737 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57800 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57800 ']' 00:05:16.737 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.737 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.737 [2024-11-26 19:13:15.127197] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:16.737 [2024-11-26 19:13:15.127534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57800 ] 00:05:16.996 [2024-11-26 19:13:15.274128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.996 [2024-11-26 19:13:15.317502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.996 [2024-11-26 19:13:15.387526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.255 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.255 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:17.255 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.255 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.255 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.255 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.255 { 00:05:17.255 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.255 } 00:05:17.255 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.255 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.255 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:17.255 1 heaps totaling size 810.000000 MiB 00:05:17.255 size: 810.000000 MiB heap id: 0 00:05:17.255 end heaps---------- 00:05:17.255 9 mempools totaling size 595.772034 MiB 00:05:17.255 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.255 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.255 size: 92.545471 MiB name: bdev_io_57800 00:05:17.255 size: 50.003479 MiB name: msgpool_57800 00:05:17.255 size: 36.509338 MiB name: fsdev_io_57800 00:05:17.255 size: 21.763794 MiB name: PDU_Pool 00:05:17.255 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.255 size: 4.133484 MiB name: evtpool_57800 00:05:17.255 size: 0.026123 MiB name: Session_Pool 00:05:17.255 end mempools------- 00:05:17.255 6 memzones totaling size 4.142822 MiB 00:05:17.255 size: 1.000366 MiB name: RG_ring_0_57800 00:05:17.255 size: 1.000366 MiB name: RG_ring_1_57800 00:05:17.255 size: 1.000366 MiB name: RG_ring_4_57800 00:05:17.255 size: 1.000366 MiB name: RG_ring_5_57800 00:05:17.255 size: 0.125366 MiB name: RG_ring_2_57800 00:05:17.255 size: 0.015991 MiB name: RG_ring_3_57800 00:05:17.255 end memzones------- 00:05:17.255 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.516 heap id: 0 total size: 810.000000 MiB number of busy elements: 314 number of free elements: 15 00:05:17.516 list of free elements. 
size: 10.813049 MiB 00:05:17.516 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:17.516 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:17.516 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:17.516 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:17.516 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:17.516 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:17.516 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:17.516 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:17.516 element at address: 0x20001a600000 with size: 0.567505 MiB 00:05:17.516 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:17.516 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:17.516 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:17.516 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:17.516 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:17.516 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:17.516 list of standard malloc elements. size: 199.268066 MiB 00:05:17.516 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:17.516 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:17.516 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:17.516 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:17.516 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:17.516 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:17.516 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:17.516 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:17.516 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:17.516 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:17.516 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:17.516 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:17.516 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:17.516 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:17.517 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691480 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691540 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a692ec0 with size: 0.000183 MiB 
00:05:17.517 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:17.517 element at 
address: 0x20001a695440 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:17.517 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e580 
with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:17.518 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:17.518 list of memzone associated elements. 
size: 599.918884 MiB 00:05:17.518 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:17.518 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:17.518 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:17.518 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:17.518 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:17.518 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57800_0 00:05:17.518 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:17.518 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57800_0 00:05:17.518 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:17.518 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57800_0 00:05:17.518 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:17.518 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:17.518 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:17.518 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:17.518 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:17.518 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57800_0 00:05:17.518 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:17.518 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57800 00:05:17.518 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:17.518 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57800 00:05:17.518 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:17.518 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:17.518 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:17.518 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:17.518 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:17.518 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:17.518 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:17.518 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:17.518 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:17.518 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57800 00:05:17.518 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:17.518 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57800 00:05:17.518 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:17.518 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57800 00:05:17.518 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:17.518 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57800 00:05:17.518 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:17.518 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57800 00:05:17.518 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:17.518 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57800 00:05:17.518 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:17.518 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:17.518 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:17.518 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:17.518 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:17.518 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:17.518 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:17.518 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57800 00:05:17.518 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:17.518 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57800 00:05:17.518 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:17.518 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:17.518 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:17.518 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:17.518 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:17.518 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57800 00:05:17.518 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:17.518 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:17.518 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:17.518 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57800 00:05:17.518 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:17.518 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57800 00:05:17.518 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:17.518 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57800 00:05:17.518 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:17.518 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:17.518 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:17.518 19:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57800 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57800 ']' 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57800 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57800 00:05:17.518 killing process with pid 57800 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57800' 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57800 00:05:17.518 19:13:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57800 00:05:17.777 ************************************ 00:05:17.777 END TEST dpdk_mem_utility 00:05:17.777 ************************************ 00:05:17.777 00:05:17.777 real 0m1.257s 00:05:17.777 user 0m1.214s 00:05:17.777 sys 0m0.394s 00:05:17.777 19:13:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.777 19:13:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.777 19:13:16 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.777 19:13:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.777 19:13:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.777 19:13:16 -- common/autotest_common.sh@10 -- # set +x 
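The dpdk_mem_utility test traced above comes down to four steps: start spdk_tgt, call the env_dpdk_get_mem_stats RPC (which writes the EAL allocator state to /tmp/spdk_mem_dump.txt), run scripts/dpdk_mem_info.py for the heap/mempool/memzone summary, then rerun it with -m 0 for the per-element view of heap 0. A rough by-hand equivalent, assuming scripts/rpc.py as the RPC client (the harness goes through its rpc_cmd wrapper instead):

cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt &                      # target must be up and listening on /var/tmp/spdk.sock
./scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                  # heap/mempool/memzone totals, as printed above
./scripts/dpdk_mem_info.py -m 0             # element-by-element dump of heap 0
kill %1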
00:05:17.777 ************************************ 00:05:17.777 START TEST event 00:05:17.777 ************************************ 00:05:17.777 19:13:16 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.036 * Looking for test storage... 00:05:18.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.036 19:13:16 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.036 19:13:16 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.036 19:13:16 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.036 19:13:16 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.036 19:13:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.036 19:13:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.036 19:13:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.036 19:13:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.036 19:13:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.036 19:13:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.036 19:13:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.036 19:13:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.036 19:13:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.036 19:13:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.036 19:13:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.036 19:13:16 event -- scripts/common.sh@344 -- # case "$op" in 00:05:18.036 19:13:16 event -- scripts/common.sh@345 -- # : 1 00:05:18.036 19:13:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.036 19:13:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.036 19:13:16 event -- scripts/common.sh@365 -- # decimal 1 00:05:18.036 19:13:16 event -- scripts/common.sh@353 -- # local d=1 00:05:18.036 19:13:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.036 19:13:16 event -- scripts/common.sh@355 -- # echo 1 00:05:18.036 19:13:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.036 19:13:16 event -- scripts/common.sh@366 -- # decimal 2 00:05:18.036 19:13:16 event -- scripts/common.sh@353 -- # local d=2 00:05:18.036 19:13:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.036 19:13:16 event -- scripts/common.sh@355 -- # echo 2 00:05:18.036 19:13:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.036 19:13:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.036 19:13:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.037 19:13:16 event -- scripts/common.sh@368 -- # return 0 00:05:18.037 19:13:16 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.037 19:13:16 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.037 --rc genhtml_branch_coverage=1 00:05:18.037 --rc genhtml_function_coverage=1 00:05:18.037 --rc genhtml_legend=1 00:05:18.037 --rc geninfo_all_blocks=1 00:05:18.037 --rc geninfo_unexecuted_blocks=1 00:05:18.037 00:05:18.037 ' 00:05:18.037 19:13:16 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.037 --rc genhtml_branch_coverage=1 00:05:18.037 --rc genhtml_function_coverage=1 00:05:18.037 --rc genhtml_legend=1 00:05:18.037 --rc 
geninfo_all_blocks=1 00:05:18.037 --rc geninfo_unexecuted_blocks=1 00:05:18.037 00:05:18.037 ' 00:05:18.037 19:13:16 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.037 --rc genhtml_branch_coverage=1 00:05:18.037 --rc genhtml_function_coverage=1 00:05:18.037 --rc genhtml_legend=1 00:05:18.037 --rc geninfo_all_blocks=1 00:05:18.037 --rc geninfo_unexecuted_blocks=1 00:05:18.037 00:05:18.037 ' 00:05:18.037 19:13:16 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.037 --rc genhtml_branch_coverage=1 00:05:18.037 --rc genhtml_function_coverage=1 00:05:18.037 --rc genhtml_legend=1 00:05:18.037 --rc geninfo_all_blocks=1 00:05:18.037 --rc geninfo_unexecuted_blocks=1 00:05:18.037 00:05:18.037 ' 00:05:18.037 19:13:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:18.037 19:13:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.037 19:13:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.037 19:13:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:18.037 19:13:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.037 19:13:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.037 ************************************ 00:05:18.037 START TEST event_perf 00:05:18.037 ************************************ 00:05:18.037 19:13:16 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.037 Running I/O for 1 seconds...[2024-11-26 19:13:16.403477] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:18.037 [2024-11-26 19:13:16.403703] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57872 ] 00:05:18.295 [2024-11-26 19:13:16.547134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.295 [2024-11-26 19:13:16.599785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.295 [2024-11-26 19:13:16.599915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.295 [2024-11-26 19:13:16.600059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.295 [2024-11-26 19:13:16.600061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.229 Running I/O for 1 seconds... 00:05:19.229 lcore 0: 208637 00:05:19.229 lcore 1: 208637 00:05:19.229 lcore 2: 208638 00:05:19.229 lcore 3: 208638 00:05:19.229 done. 
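event_perf above spins up one reactor per core in the 0xF mask, pumps events for one second, and prints the per-lcore counts (about 208k each here); the reactor and reactor_perf runs that follow do the same on a single core. Invoked outside the harness, with the flags the run_test lines show, the calls are roughly:

cd /home/vagrant/spdk_repo/spdk
./test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts, as above
./test/event/reactor/reactor -t 1                # oneshot/tick trace, as below
./test/event/reactor_perf/reactor_perf -t 1      # events-per-second figure, as below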
00:05:19.229 ************************************ 00:05:19.229 END TEST event_perf 00:05:19.229 ************************************ 00:05:19.229 00:05:19.229 real 0m1.265s 00:05:19.229 user 0m4.096s 00:05:19.229 sys 0m0.050s 00:05:19.229 19:13:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.229 19:13:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.487 19:13:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:19.487 19:13:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:19.487 19:13:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.487 19:13:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.487 ************************************ 00:05:19.487 START TEST event_reactor 00:05:19.487 ************************************ 00:05:19.487 19:13:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:19.488 [2024-11-26 19:13:17.718081] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:19.488 [2024-11-26 19:13:17.718156] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57911 ] 00:05:19.488 [2024-11-26 19:13:17.861776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.488 [2024-11-26 19:13:17.903233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.889 test_start 00:05:20.889 oneshot 00:05:20.889 tick 100 00:05:20.889 tick 100 00:05:20.889 tick 250 00:05:20.889 tick 100 00:05:20.889 tick 100 00:05:20.889 tick 100 00:05:20.889 tick 250 00:05:20.889 tick 500 00:05:20.889 tick 100 00:05:20.889 tick 100 00:05:20.889 tick 250 00:05:20.889 tick 100 00:05:20.889 tick 100 00:05:20.889 test_end 00:05:20.889 00:05:20.889 real 0m1.245s 00:05:20.889 user 0m1.099s 00:05:20.889 sys 0m0.040s 00:05:20.889 ************************************ 00:05:20.889 END TEST event_reactor 00:05:20.889 ************************************ 00:05:20.889 19:13:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.889 19:13:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:20.889 19:13:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.889 19:13:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:20.889 19:13:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.889 19:13:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.889 ************************************ 00:05:20.889 START TEST event_reactor_perf 00:05:20.889 ************************************ 00:05:20.889 19:13:19 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.889 [2024-11-26 19:13:19.018862] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:20.889 [2024-11-26 19:13:19.019154] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57947 ] 00:05:20.889 [2024-11-26 19:13:19.164959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.889 [2024-11-26 19:13:19.206617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.825 test_start 00:05:21.825 test_end 00:05:21.825 Performance: 433336 events per second 00:05:21.825 00:05:21.825 real 0m1.248s 00:05:21.825 user 0m1.106s 00:05:21.825 sys 0m0.036s 00:05:21.825 19:13:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.825 19:13:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.825 ************************************ 00:05:21.825 END TEST event_reactor_perf 00:05:21.825 ************************************ 00:05:22.085 19:13:20 event -- event/event.sh@49 -- # uname -s 00:05:22.085 19:13:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.085 19:13:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:22.085 19:13:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.085 19:13:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.085 19:13:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.085 ************************************ 00:05:22.085 START TEST event_scheduler 00:05:22.085 ************************************ 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:22.085 * Looking for test storage... 
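The event_scheduler test that starts here launches the scheduler test app with a four-core mask, main lcore 2 and --wait-for-rpc, then drives it entirely over RPC: framework_set_scheduler dynamic, framework_start_init, and a series of scheduler_plugin thread create/set-active/delete calls, all traced below. A condensed sketch of that sequence, assuming scripts/rpc.py with the scheduler_plugin on its path (the harness uses its own rpc_cmd wrapper):

cd /home/vagrant/spdk_repo/spdk
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12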
00:05:22.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.085 19:13:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.085 --rc genhtml_branch_coverage=1 00:05:22.085 --rc genhtml_function_coverage=1 00:05:22.085 --rc genhtml_legend=1 00:05:22.085 --rc geninfo_all_blocks=1 00:05:22.085 --rc geninfo_unexecuted_blocks=1 00:05:22.085 00:05:22.085 ' 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.085 --rc genhtml_branch_coverage=1 00:05:22.085 --rc genhtml_function_coverage=1 00:05:22.085 --rc genhtml_legend=1 00:05:22.085 --rc geninfo_all_blocks=1 00:05:22.085 --rc geninfo_unexecuted_blocks=1 00:05:22.085 00:05:22.085 ' 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.085 --rc genhtml_branch_coverage=1 00:05:22.085 --rc genhtml_function_coverage=1 00:05:22.085 --rc genhtml_legend=1 00:05:22.085 --rc geninfo_all_blocks=1 00:05:22.085 --rc geninfo_unexecuted_blocks=1 00:05:22.085 00:05:22.085 ' 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.085 --rc genhtml_branch_coverage=1 00:05:22.085 --rc genhtml_function_coverage=1 00:05:22.085 --rc genhtml_legend=1 00:05:22.085 --rc geninfo_all_blocks=1 00:05:22.085 --rc geninfo_unexecuted_blocks=1 00:05:22.085 00:05:22.085 ' 00:05:22.085 19:13:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.085 19:13:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58011 00:05:22.085 19:13:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.085 19:13:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58011 00:05:22.085 19:13:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.085 19:13:20 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58011 ']' 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.085 19:13:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.344 [2024-11-26 19:13:20.550348] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:22.344 [2024-11-26 19:13:20.550644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58011 ] 00:05:22.344 [2024-11-26 19:13:20.704223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.345 [2024-11-26 19:13:20.761818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.345 [2024-11-26 19:13:20.761975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.345 [2024-11-26 19:13:20.762100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.345 [2024-11-26 19:13:20.762108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:23.282 19:13:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.282 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.282 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.282 POWER: Cannot set governor of lcore 0 to performance 00:05:23.282 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.282 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.282 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.282 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.282 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:23.282 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:23.282 POWER: Unable to set Power Management Environment for lcore 0 00:05:23.282 [2024-11-26 19:13:21.564783] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:23.282 [2024-11-26 19:13:21.564796] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:23.282 [2024-11-26 19:13:21.564805] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.282 [2024-11-26 19:13:21.564816] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.282 [2024-11-26 19:13:21.564824] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.282 [2024-11-26 19:13:21.564830] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.282 19:13:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 [2024-11-26 19:13:21.624063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.282 [2024-11-26 19:13:21.655353] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.282 19:13:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 ************************************ 00:05:23.282 START TEST scheduler_create_thread 00:05:23.282 ************************************ 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 2 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 3 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 4 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 5 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 6 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.282 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.541 7 00:05:23.541 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.541 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.541 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.541 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.541 8 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.542 9 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.542 10 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.542 19:13:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.542 19:13:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.481 ************************************ 00:05:24.481 END TEST scheduler_create_thread 00:05:24.481 ************************************ 00:05:24.481 19:13:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.481 00:05:24.481 real 0m1.168s 00:05:24.481 user 0m0.017s 00:05:24.481 sys 0m0.006s 00:05:24.481 19:13:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.481 19:13:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.481 19:13:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.481 19:13:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58011 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58011 ']' 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58011 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58011 00:05:24.481 killing process with pid 58011 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58011' 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58011 00:05:24.481 19:13:22 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58011 00:05:25.047 [2024-11-26 19:13:23.317518] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:25.307 ************************************ 00:05:25.307 END TEST event_scheduler 00:05:25.307 ************************************ 00:05:25.307 00:05:25.307 real 0m3.205s 00:05:25.307 user 0m5.951s 00:05:25.307 sys 0m0.375s 00:05:25.307 19:13:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.307 19:13:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.307 19:13:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.307 19:13:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.307 19:13:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.307 19:13:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.307 19:13:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.307 ************************************ 00:05:25.307 START TEST app_repeat 00:05:25.307 ************************************ 00:05:25.307 19:13:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:25.307 Process app_repeat pid: 58094 00:05:25.307 spdk_app_start Round 0 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58094 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58094' 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.307 19:13:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58094 /var/tmp/spdk-nbd.sock 00:05:25.307 19:13:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58094 ']' 00:05:25.307 19:13:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.307 19:13:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.307 19:13:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:25.307 19:13:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.307 19:13:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.307 [2024-11-26 19:13:23.601098] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:25.307 [2024-11-26 19:13:23.601181] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58094 ] 00:05:25.566 [2024-11-26 19:13:23.748918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.566 [2024-11-26 19:13:23.828032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.566 [2024-11-26 19:13:23.828040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.566 [2024-11-26 19:13:23.892666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.566 19:13:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.566 19:13:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.566 19:13:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.826 Malloc0 00:05:26.084 19:13:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.344 Malloc1 00:05:26.344 19:13:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.344 19:13:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.603 /dev/nbd0 00:05:26.603 19:13:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.603 19:13:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.603 1+0 records in 00:05:26.603 1+0 records out 00:05:26.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452797 s, 9.0 MB/s 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.603 19:13:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.603 19:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.603 19:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.603 19:13:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.862 /dev/nbd1 00:05:26.862 19:13:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.862 19:13:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.862 1+0 records in 00:05:26.862 1+0 records out 00:05:26.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259088 s, 15.8 MB/s 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.862 19:13:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.862 19:13:25 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:26.862 19:13:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.862 19:13:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.862 19:13:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.862 19:13:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.862 19:13:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.121 { 00:05:27.121 "nbd_device": "/dev/nbd0", 00:05:27.121 "bdev_name": "Malloc0" 00:05:27.121 }, 00:05:27.121 { 00:05:27.121 "nbd_device": "/dev/nbd1", 00:05:27.121 "bdev_name": "Malloc1" 00:05:27.121 } 00:05:27.121 ]' 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.121 { 00:05:27.121 "nbd_device": "/dev/nbd0", 00:05:27.121 "bdev_name": "Malloc0" 00:05:27.121 }, 00:05:27.121 { 00:05:27.121 "nbd_device": "/dev/nbd1", 00:05:27.121 "bdev_name": "Malloc1" 00:05:27.121 } 00:05:27.121 ]' 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.121 /dev/nbd1' 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.121 /dev/nbd1' 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.121 256+0 records in 00:05:27.121 256+0 records out 00:05:27.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488483 s, 215 MB/s 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.121 256+0 records in 00:05:27.121 256+0 records out 00:05:27.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022571 s, 46.5 MB/s 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.121 19:13:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.379 256+0 records in 00:05:27.379 
256+0 records out 00:05:27.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260048 s, 40.3 MB/s 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.379 19:13:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.637 19:13:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.895 19:13:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.896 19:13:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.896 19:13:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.896 19:13:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.154 19:13:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.154 19:13:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.413 19:13:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.672 [2024-11-26 19:13:27.003905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.672 [2024-11-26 19:13:27.047463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.672 [2024-11-26 19:13:27.047474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.672 [2024-11-26 19:13:27.103171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.672 [2024-11-26 19:13:27.103310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.672 [2024-11-26 19:13:27.103324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.955 19:13:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.955 spdk_app_start Round 1 00:05:31.955 19:13:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:31.955 19:13:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58094 /var/tmp/spdk-nbd.sock 00:05:31.955 19:13:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58094 ']' 00:05:31.955 19:13:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.955 19:13:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.955 19:13:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:31.955 19:13:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.955 19:13:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.955 19:13:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.955 19:13:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.955 19:13:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.955 Malloc0 00:05:31.955 19:13:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.214 Malloc1 00:05:32.214 19:13:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.214 19:13:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.473 /dev/nbd0 00:05:32.473 19:13:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.473 19:13:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.473 19:13:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:32.473 19:13:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.473 19:13:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.473 19:13:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.473 19:13:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:32.473 19:13:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.473 19:13:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.732 19:13:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.732 19:13:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.732 1+0 records in 00:05:32.732 1+0 records out 
00:05:32.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279613 s, 14.6 MB/s 00:05:32.732 19:13:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.732 19:13:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.732 19:13:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.732 19:13:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.732 19:13:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.732 19:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.732 19:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.732 19:13:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.992 /dev/nbd1 00:05:32.992 19:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.992 19:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.992 1+0 records in 00:05:32.992 1+0 records out 00:05:32.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348435 s, 11.8 MB/s 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.992 19:13:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.992 19:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.992 19:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.992 19:13:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.992 19:13:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.992 19:13:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.250 19:13:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.251 { 00:05:33.251 "nbd_device": "/dev/nbd0", 00:05:33.251 "bdev_name": "Malloc0" 00:05:33.251 }, 00:05:33.251 { 00:05:33.251 "nbd_device": "/dev/nbd1", 00:05:33.251 "bdev_name": "Malloc1" 00:05:33.251 } 
00:05:33.251 ]' 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.251 { 00:05:33.251 "nbd_device": "/dev/nbd0", 00:05:33.251 "bdev_name": "Malloc0" 00:05:33.251 }, 00:05:33.251 { 00:05:33.251 "nbd_device": "/dev/nbd1", 00:05:33.251 "bdev_name": "Malloc1" 00:05:33.251 } 00:05:33.251 ]' 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.251 /dev/nbd1' 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.251 /dev/nbd1' 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.251 256+0 records in 00:05:33.251 256+0 records out 00:05:33.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103091 s, 102 MB/s 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.251 256+0 records in 00:05:33.251 256+0 records out 00:05:33.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251432 s, 41.7 MB/s 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.251 19:13:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.509 256+0 records in 00:05:33.509 256+0 records out 00:05:33.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249409 s, 42.0 MB/s 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.509 19:13:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.768 19:13:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.027 19:13:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.286 19:13:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.286 19:13:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.545 19:13:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.806 [2024-11-26 19:13:33.132741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.806 [2024-11-26 19:13:33.180623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.806 [2024-11-26 19:13:33.180634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.806 [2024-11-26 19:13:33.240203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.806 [2024-11-26 19:13:33.240283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.806 [2024-11-26 19:13:33.240298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.097 spdk_app_start Round 2 00:05:38.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.097 19:13:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.097 19:13:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.097 19:13:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58094 /var/tmp/spdk-nbd.sock 00:05:38.097 19:13:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58094 ']' 00:05:38.097 19:13:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.097 19:13:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.097 19:13:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:38.097 19:13:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.097 19:13:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.097 19:13:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.097 19:13:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.097 19:13:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.097 Malloc0 00:05:38.097 19:13:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.357 Malloc1 00:05:38.357 19:13:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.357 19:13:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.616 /dev/nbd0 00:05:38.616 19:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.875 19:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.875 1+0 records in 00:05:38.875 1+0 records out 
00:05:38.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216151 s, 18.9 MB/s 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.875 19:13:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.875 19:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.875 19:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.875 19:13:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.134 /dev/nbd1 00:05:39.134 19:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.134 19:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.134 19:13:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:39.134 19:13:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.134 19:13:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.134 19:13:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.135 1+0 records in 00:05:39.135 1+0 records out 00:05:39.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302828 s, 13.5 MB/s 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.135 19:13:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.135 19:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.135 19:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.135 19:13:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.135 19:13:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.135 19:13:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.394 { 00:05:39.394 "nbd_device": "/dev/nbd0", 00:05:39.394 "bdev_name": "Malloc0" 00:05:39.394 }, 00:05:39.394 { 00:05:39.394 "nbd_device": "/dev/nbd1", 00:05:39.394 "bdev_name": "Malloc1" 00:05:39.394 } 
00:05:39.394 ]' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.394 { 00:05:39.394 "nbd_device": "/dev/nbd0", 00:05:39.394 "bdev_name": "Malloc0" 00:05:39.394 }, 00:05:39.394 { 00:05:39.394 "nbd_device": "/dev/nbd1", 00:05:39.394 "bdev_name": "Malloc1" 00:05:39.394 } 00:05:39.394 ]' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.394 /dev/nbd1' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.394 /dev/nbd1' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.394 256+0 records in 00:05:39.394 256+0 records out 00:05:39.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00759974 s, 138 MB/s 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.394 256+0 records in 00:05:39.394 256+0 records out 00:05:39.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240785 s, 43.5 MB/s 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.394 256+0 records in 00:05:39.394 256+0 records out 00:05:39.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241541 s, 43.4 MB/s 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.394 19:13:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.394 19:13:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.653 19:13:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.913 19:13:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.172 19:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.432 19:13:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.432 19:13:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.000 19:13:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.000 [2024-11-26 19:13:39.318452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.000 [2024-11-26 19:13:39.362588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.000 [2024-11-26 19:13:39.362599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.000 [2024-11-26 19:13:39.420446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.000 [2024-11-26 19:13:39.420518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.000 [2024-11-26 19:13:39.420531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.333 19:13:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58094 /var/tmp/spdk-nbd.sock 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58094 ']' 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
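Before the app_repeat rounds above, the nbd trace exercises a plain write-then-verify loop: 1 MiB of random data goes into a scratch file, is copied onto each exported /dev/nbdX with dd, and is compared back with cmp before the devices are stopped. A condensed sketch of that pattern, assuming the same device list and block size as the trace (the scratch path is shortened here for readability; it is not the path used by the suite):

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=/tmp/nbdrandtest          # trace uses spdk/test/event/nbdrandtest

  # 1 MiB of random reference data
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write phase
  done

  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # verify phase
  done

  rm "$tmp_file"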
00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.333 19:13:42 event.app_repeat -- event/event.sh@39 -- # killprocess 58094 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58094 ']' 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58094 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58094 00:05:44.333 killing process with pid 58094 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58094' 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58094 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58094 00:05:44.333 spdk_app_start is called in Round 0. 00:05:44.333 Shutdown signal received, stop current app iteration 00:05:44.333 Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 reinitialization... 00:05:44.333 spdk_app_start is called in Round 1. 00:05:44.333 Shutdown signal received, stop current app iteration 00:05:44.333 Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 reinitialization... 00:05:44.333 spdk_app_start is called in Round 2. 00:05:44.333 Shutdown signal received, stop current app iteration 00:05:44.333 Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 reinitialization... 00:05:44.333 spdk_app_start is called in Round 3. 00:05:44.333 Shutdown signal received, stop current app iteration 00:05:44.333 19:13:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.333 19:13:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:44.333 00:05:44.333 real 0m19.162s 00:05:44.333 user 0m43.774s 00:05:44.333 sys 0m2.897s 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.333 ************************************ 00:05:44.333 END TEST app_repeat 00:05:44.333 ************************************ 00:05:44.333 19:13:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.593 19:13:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.593 19:13:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.593 19:13:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.593 19:13:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.593 19:13:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.593 ************************************ 00:05:44.593 START TEST cpu_locks 00:05:44.593 ************************************ 00:05:44.593 19:13:42 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.593 * Looking for test storage... 
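The app_repeat run is torn down above with the killprocess helper, which looks up the process name of the pid before deciding how to signal it and then waits for it to exit. A minimal sketch of that guard, assuming the ps/comm lookup and the reactor_0/sudo distinction seen in the trace (the helper body is paraphrased, not copied from autotest_common.sh):

  killprocess() {
    local pid=$1
    local name
    name=$(ps --no-headers -o comm= "$pid") || return 0   # already exited
    echo "killing process with pid $pid"
    if [[ $name == sudo ]]; then
      sudo kill "$pid"     # target was launched through sudo
    else
      kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true
  }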
00:05:44.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:44.593 19:13:42 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.593 19:13:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.593 19:13:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.593 19:13:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.593 19:13:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:44.593 19:13:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.593 19:13:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.593 19:13:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.593 19:13:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.593 --rc genhtml_branch_coverage=1 00:05:44.593 --rc genhtml_function_coverage=1 00:05:44.593 --rc genhtml_legend=1 00:05:44.593 --rc geninfo_all_blocks=1 00:05:44.593 --rc geninfo_unexecuted_blocks=1 00:05:44.593 00:05:44.593 ' 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.593 --rc genhtml_branch_coverage=1 00:05:44.593 --rc genhtml_function_coverage=1 
00:05:44.593 --rc genhtml_legend=1 00:05:44.593 --rc geninfo_all_blocks=1 00:05:44.593 --rc geninfo_unexecuted_blocks=1 00:05:44.593 00:05:44.593 ' 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.593 --rc genhtml_branch_coverage=1 00:05:44.593 --rc genhtml_function_coverage=1 00:05:44.593 --rc genhtml_legend=1 00:05:44.593 --rc geninfo_all_blocks=1 00:05:44.593 --rc geninfo_unexecuted_blocks=1 00:05:44.593 00:05:44.593 ' 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.593 --rc genhtml_branch_coverage=1 00:05:44.593 --rc genhtml_function_coverage=1 00:05:44.593 --rc genhtml_legend=1 00:05:44.593 --rc geninfo_all_blocks=1 00:05:44.593 --rc geninfo_unexecuted_blocks=1 00:05:44.593 00:05:44.593 ' 00:05:44.593 19:13:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.593 19:13:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.593 19:13:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.593 19:13:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.593 19:13:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.593 ************************************ 00:05:44.593 START TEST default_locks 00:05:44.593 ************************************ 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58538 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58538 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.593 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.594 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.594 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.852 [2024-11-26 19:13:43.086350] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:44.852 [2024-11-26 19:13:43.087293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58538 ] 00:05:44.852 [2024-11-26 19:13:43.234944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.852 [2024-11-26 19:13:43.290391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.112 [2024-11-26 19:13:43.365958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.371 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.371 19:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:45.371 19:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58538 00:05:45.371 19:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.371 19:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58538 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58538 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58538 ']' 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58538 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58538 00:05:45.630 killing process with pid 58538 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58538' 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58538 00:05:45.630 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58538 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58538 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58538 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:46.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.200 ERROR: process (pid: 58538) is no longer running 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58538 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.200 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58538) - No such process 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.200 00:05:46.200 real 0m1.444s 00:05:46.200 user 0m1.398s 00:05:46.200 sys 0m0.556s 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.200 19:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.200 ************************************ 00:05:46.200 END TEST default_locks 00:05:46.200 ************************************ 00:05:46.200 19:13:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.200 19:13:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.200 19:13:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.200 19:13:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.200 ************************************ 00:05:46.200 START TEST default_locks_via_rpc 00:05:46.200 ************************************ 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:46.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
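The default_locks test above boils down to one observable: a target started with -m 0x1 must hold an spdk_cpu_lock file lock, and that lock must vanish with the process (hence the expected "No such process" error once the pid is gone). A sketch of the check as traced, with the surrounding assertions simplified and spdk_tgt_pid assumed to hold the target's pid:

  locks_exist() {
    local pid=$1
    # each claimed core shows up as an spdk_cpu_lock_* lock held by the target
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid"                    # true while the target runs
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"
  ! locks_exist "$spdk_tgt_pid"                  # lock gone with the process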
00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58583 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58583 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58583 ']' 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.200 19:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.200 [2024-11-26 19:13:44.575241] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:46.200 [2024-11-26 19:13:44.575480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58583 ] 00:05:46.460 [2024-11-26 19:13:44.719493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.460 [2024-11-26 19:13:44.777262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.460 [2024-11-26 19:13:44.850298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 58583 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58583 00:05:46.721 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58583 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58583 ']' 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58583 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58583 00:05:47.291 killing process with pid 58583 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58583' 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58583 00:05:47.291 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58583 00:05:47.550 00:05:47.550 real 0m1.388s 00:05:47.550 user 0m1.360s 00:05:47.550 sys 0m0.531s 00:05:47.550 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.550 19:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.550 ************************************ 00:05:47.550 END TEST default_locks_via_rpc 00:05:47.550 ************************************ 00:05:47.550 19:13:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:47.550 19:13:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.550 19:13:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.550 19:13:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.550 ************************************ 00:05:47.550 START TEST non_locking_app_on_locked_coremask 00:05:47.550 ************************************ 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:47.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
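default_locks_via_rpc above toggles the same locks at runtime instead of at startup, using the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs shown in the trace. A hedged sketch of that sequence (rpc.py path and default socket as in the trace; the empty-lock check is paraphrased rather than copied from the no_locks helper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" framework_disable_cpumask_locks         # release core locks at runtime
  shopt -s nullglob
  lock_files=(/var/tmp/spdk_cpu_lock_*)
  (( ${#lock_files[@]} == 0 ))                   # expect none while disabled

  "$rpc" framework_enable_cpumask_locks          # re-acquire them
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock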
00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58626 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58626 /var/tmp/spdk.sock 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58626 ']' 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.550 19:13:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.810 [2024-11-26 19:13:46.015309] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:47.810 [2024-11-26 19:13:46.015537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58626 ] 00:05:47.810 [2024-11-26 19:13:46.156143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.810 [2024-11-26 19:13:46.219556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.069 [2024-11-26 19:13:46.294511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58635 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58635 /var/tmp/spdk2.sock 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58635 ']' 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.329 19:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.329 [2024-11-26 19:13:46.565931] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:48.329 [2024-11-26 19:13:46.566207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58635 ] 00:05:48.329 [2024-11-26 19:13:46.727565] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:48.329 [2024-11-26 19:13:46.727616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.588 [2024-11-26 19:13:46.841184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.588 [2024-11-26 19:13:46.981881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.525 19:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.525 19:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.525 19:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58626 00:05:49.525 19:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58626 00:05:49.525 19:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58626 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58626 ']' 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58626 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58626 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.092 killing process with pid 58626 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58626' 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58626 00:05:50.092 19:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58626 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58635 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58635 ']' 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 58635 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58635 00:05:51.029 killing process with pid 58635 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58635' 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58635 00:05:51.029 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58635 00:05:51.288 ************************************ 00:05:51.288 END TEST non_locking_app_on_locked_coremask 00:05:51.288 ************************************ 00:05:51.288 00:05:51.288 real 0m3.742s 00:05:51.288 user 0m4.124s 00:05:51.288 sys 0m1.150s 00:05:51.288 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.288 19:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 19:13:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.547 19:13:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.547 19:13:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.547 19:13:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 ************************************ 00:05:51.547 START TEST locking_app_on_unlocked_coremask 00:05:51.547 ************************************ 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58702 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58702 /var/tmp/spdk.sock 00:05:51.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58702 ']' 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
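non_locking_app_on_locked_coremask above is the core scenario of this suite: while one target holds core 0, a second target may share that core only by opting out of the cpumask locks and taking its own RPC socket. A condensed sketch of the two launches, with the binary path, mask, and socket names taken from the trace and the real retry loops replaced by the waitforlisten helper the trace uses:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                           # first target claims core 0
  pid1=$!
  waitforlisten "$pid1" /var/tmp/spdk.sock

  # second target shares core 0 without claiming it, on a separate socket
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  waitforlisten "$pid2" /var/tmp/spdk2.sock

  lslocks -p "$pid1" | grep -q spdk_cpu_lock     # only the first holds the lock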
00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.547 19:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 [2024-11-26 19:13:49.812659] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:05:51.547 [2024-11-26 19:13:49.812890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58702 ] 00:05:51.547 [2024-11-26 19:13:49.952778] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.547 [2024-11-26 19:13:49.953058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.807 [2024-11-26 19:13:50.016626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.807 [2024-11-26 19:13:50.093468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58710 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58710 /var/tmp/spdk2.sock 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58710 ']' 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.065 19:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.065 [2024-11-26 19:13:50.372887] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:52.065 [2024-11-26 19:13:50.373246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58710 ] 00:05:52.324 [2024-11-26 19:13:50.536691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.324 [2024-11-26 19:13:50.652503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.582 [2024-11-26 19:13:50.800596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.169 19:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.169 19:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.169 19:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58710 00:05:53.169 19:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58710 00:05:53.169 19:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58702 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58702 ']' 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58702 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58702 00:05:54.105 killing process with pid 58702 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58702' 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58702 00:05:54.105 19:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58702 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58710 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58710 ']' 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58710 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58710 00:05:54.673 killing process with pid 58710 00:05:54.673 19:13:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58710' 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58710 00:05:54.673 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58710 00:05:55.239 00:05:55.239 real 0m3.701s 00:05:55.239 user 0m4.051s 00:05:55.239 sys 0m1.146s 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.239 ************************************ 00:05:55.239 END TEST locking_app_on_unlocked_coremask 00:05:55.239 ************************************ 00:05:55.239 19:13:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.239 19:13:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.239 19:13:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.239 19:13:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.239 ************************************ 00:05:55.239 START TEST locking_app_on_locked_coremask 00:05:55.239 ************************************ 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58777 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58777 /var/tmp/spdk.sock 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58777 ']' 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.239 19:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.239 [2024-11-26 19:13:53.580315] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:55.239 [2024-11-26 19:13:53.580477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58777 ] 00:05:55.497 [2024-11-26 19:13:53.725436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.497 [2024-11-26 19:13:53.782470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.497 [2024-11-26 19:13:53.861761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58799 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58799 /var/tmp/spdk2.sock 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58799 /var/tmp/spdk2.sock 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.432 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58799 /var/tmp/spdk2.sock 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58799 ']' 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.433 19:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.433 [2024-11-26 19:13:54.693727] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:56.433 [2024-11-26 19:13:54.693835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58799 ] 00:05:56.433 [2024-11-26 19:13:54.858190] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58777 has claimed it. 00:05:56.433 [2024-11-26 19:13:54.858257] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.000 ERROR: process (pid: 58799) is no longer running 00:05:57.000 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58799) - No such process 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58777 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58777 00:05:57.000 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58777 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58777 ']' 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58777 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58777 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.566 killing process with pid 58777 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58777' 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58777 00:05:57.566 19:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58777 00:05:57.825 00:05:57.825 real 0m2.715s 00:05:57.825 user 0m3.176s 00:05:57.825 sys 0m0.661s 00:05:57.825 19:13:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.825 19:13:56 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:57.825 ************************************ 00:05:57.825 END TEST locking_app_on_locked_coremask 00:05:57.825 ************************************ 00:05:58.084 19:13:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:58.084 19:13:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.084 19:13:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.084 19:13:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.084 ************************************ 00:05:58.084 START TEST locking_overlapped_coremask 00:05:58.084 ************************************ 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58844 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58844 /var/tmp/spdk.sock 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58844 ']' 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.084 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.084 [2024-11-26 19:13:56.359545] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
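locking_app_on_locked_coremask above checks the opposite case: a second target that does try to claim an already-locked core must abort ("Unable to acquire lock on assigned core mask - exiting"), and the test passes only because the failing waitforlisten is wrapped in NOT. A simplified sketch of that inverted assertion; the real NOT helper tracks es codes, while this version just flips the exit status:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  NOT() {
    # succeed only when the wrapped command fails
    ! "$@"
  }

  # same core mask as the running target, without --disable-cpumask-locks
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock  # startup is expected to fail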
00:05:58.084 [2024-11-26 19:13:56.359676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:05:58.084 [2024-11-26 19:13:56.511138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.343 [2024-11-26 19:13:56.576859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.343 [2024-11-26 19:13:56.577030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.343 [2024-11-26 19:13:56.577035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.343 [2024-11-26 19:13:56.654513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58855 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58855 /var/tmp/spdk2.sock 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58855 /var/tmp/spdk2.sock 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58855 /var/tmp/spdk2.sock 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58855 ']' 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.602 19:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.602 [2024-11-26 19:13:56.936210] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:05:58.602 [2024-11-26 19:13:56.936991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58855 ] 00:05:58.916 [2024-11-26 19:13:57.100344] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58844 has claimed it. 00:05:58.916 [2024-11-26 19:13:57.100412] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.499 ERROR: process (pid: 58855) is no longer running 00:05:59.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58855) - No such process 00:05:59.499 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.499 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:59.499 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:59.499 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.499 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.499 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.499 19:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58844 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58844 ']' 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58844 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58844 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.500 killing process with pid 58844 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58844' 00:05:59.500 19:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58844 00:05:59.500 19:13:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58844 00:05:59.759 00:05:59.759 real 0m1.812s 00:05:59.759 user 0m4.830s 00:05:59.759 sys 0m0.446s 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.759 ************************************ 00:05:59.759 END TEST locking_overlapped_coremask 00:05:59.759 ************************************ 00:05:59.759 19:13:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.759 19:13:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.759 19:13:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.759 19:13:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.759 ************************************ 00:05:59.759 START TEST locking_overlapped_coremask_via_rpc 00:05:59.759 ************************************ 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58899 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58899 /var/tmp/spdk.sock 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58899 ']' 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.759 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.017 [2024-11-26 19:13:58.216050] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:00.017 [2024-11-26 19:13:58.216149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58899 ] 00:06:00.017 [2024-11-26 19:13:58.358081] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.017 [2024-11-26 19:13:58.358124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.017 [2024-11-26 19:13:58.412105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.017 [2024-11-26 19:13:58.412241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.017 [2024-11-26 19:13:58.412246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.276 [2024-11-26 19:13:58.489483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58911 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58911 /var/tmp/spdk2.sock 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58911 ']' 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.276 19:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.534 [2024-11-26 19:13:58.761002] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:00.534 [2024-11-26 19:13:58.761098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:06:00.534 [2024-11-26 19:13:58.929296] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
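In the via_rpc variant both targets are launched with --disable-cpumask-locks, which is why the overlapping masks 0x7 and 0x1c come up side by side here, each printing the "CPU core locks deactivated" notice; no /var/tmp/spdk_cpu_lock_NNN files are taken at startup, so the conflict on core 2 only surfaces once locking is enabled over RPC (see the error and JSON-RPC response below). A quick way to watch the lock files appear, assuming the paths from this trace:

  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # empty while locks are disabled
  # after framework_enable_cpumask_locks on the 0x7 target, exactly
  # /var/tmp/spdk_cpu_lock_000 ... /var/tmp/spdk_cpu_lock_002 should be listed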
00:06:00.534 [2024-11-26 19:13:58.929346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.792 [2024-11-26 19:13:59.052036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.792 [2024-11-26 19:13:59.056010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.792 [2024-11-26 19:13:59.056008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.792 [2024-11-26 19:13:59.190947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.729 [2024-11-26 19:13:59.855034] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58899 has claimed it. 
00:06:01.729 request: 00:06:01.729 { 00:06:01.729 "method": "framework_enable_cpumask_locks", 00:06:01.729 "req_id": 1 00:06:01.729 } 00:06:01.729 Got JSON-RPC error response 00:06:01.729 response: 00:06:01.729 { 00:06:01.729 "code": -32603, 00:06:01.729 "message": "Failed to claim CPU core: 2" 00:06:01.729 } 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58899 /var/tmp/spdk.sock 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58899 ']' 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.729 19:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58911 /var/tmp/spdk2.sock 00:06:01.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58911 ']' 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
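The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py talking to the targets' UNIX sockets, so the same sequence can be driven by hand; this is only a sketch of the flow the test exercises, with the method name and the -s socket option taken from the trace. Enabling the locks on the primary succeeds, while the same call against the second target returns the -32603 "Failed to claim CPU core: 2" error shown above:

  # primary target on the default socket /var/tmp/spdk.sock claims cores 0-2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # the second target (mask 0x1c) cannot claim core 2 and reports JSON-RPC error -32603
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected failure: core 2 already claimed by the other target"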
00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.729 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.300 ************************************ 00:06:02.300 END TEST locking_overlapped_coremask_via_rpc 00:06:02.300 ************************************ 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.300 00:06:02.300 real 0m2.315s 00:06:02.300 user 0m1.332s 00:06:02.300 sys 0m0.200s 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.300 19:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.300 19:14:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.300 19:14:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58899 ]] 00:06:02.300 19:14:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58899 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58899 ']' 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58899 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58899 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58899' 00:06:02.300 killing process with pid 58899 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58899 00:06:02.300 19:14:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58899 00:06:02.559 19:14:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58911 ]] 00:06:02.559 19:14:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58911 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58911 ']' 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58911 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.559 
19:14:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58911 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:02.559 killing process with pid 58911 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58911' 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58911 00:06:02.559 19:14:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58911 00:06:03.127 19:14:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.127 19:14:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:03.127 19:14:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58899 ]] 00:06:03.127 19:14:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58899 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58899 ']' 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58899 00:06:03.127 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58899) - No such process 00:06:03.127 Process with pid 58899 is not found 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58899 is not found' 00:06:03.127 19:14:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58911 ]] 00:06:03.127 19:14:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58911 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58911 ']' 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58911 00:06:03.127 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58911) - No such process 00:06:03.127 Process with pid 58911 is not found 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58911 is not found' 00:06:03.127 19:14:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.127 00:06:03.127 real 0m18.566s 00:06:03.127 user 0m32.432s 00:06:03.127 sys 0m5.632s 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.127 19:14:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.127 ************************************ 00:06:03.127 END TEST cpu_locks 00:06:03.127 ************************************ 00:06:03.127 00:06:03.127 real 0m45.204s 00:06:03.127 user 1m28.670s 00:06:03.127 sys 0m9.299s 00:06:03.127 19:14:01 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.127 19:14:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.127 ************************************ 00:06:03.127 END TEST event 00:06:03.127 ************************************ 00:06:03.127 19:14:01 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.127 19:14:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.127 19:14:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.127 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.127 ************************************ 00:06:03.127 START TEST thread 00:06:03.127 ************************************ 00:06:03.128 19:14:01 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.128 * Looking for test storage... 
00:06:03.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:03.128 19:14:01 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.128 19:14:01 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.128 19:14:01 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.387 19:14:01 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.387 19:14:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.387 19:14:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.387 19:14:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.387 19:14:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.387 19:14:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.387 19:14:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.387 19:14:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.387 19:14:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.387 19:14:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.387 19:14:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.387 19:14:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.387 19:14:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:03.387 19:14:01 thread -- scripts/common.sh@345 -- # : 1 00:06:03.387 19:14:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.387 19:14:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.387 19:14:01 thread -- scripts/common.sh@365 -- # decimal 1 00:06:03.387 19:14:01 thread -- scripts/common.sh@353 -- # local d=1 00:06:03.387 19:14:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.387 19:14:01 thread -- scripts/common.sh@355 -- # echo 1 00:06:03.387 19:14:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.387 19:14:01 thread -- scripts/common.sh@366 -- # decimal 2 00:06:03.387 19:14:01 thread -- scripts/common.sh@353 -- # local d=2 00:06:03.387 19:14:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.387 19:14:01 thread -- scripts/common.sh@355 -- # echo 2 00:06:03.387 19:14:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.387 19:14:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.387 19:14:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.387 19:14:01 thread -- scripts/common.sh@368 -- # return 0 00:06:03.387 19:14:01 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.387 19:14:01 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.387 --rc genhtml_branch_coverage=1 00:06:03.387 --rc genhtml_function_coverage=1 00:06:03.387 --rc genhtml_legend=1 00:06:03.387 --rc geninfo_all_blocks=1 00:06:03.387 --rc geninfo_unexecuted_blocks=1 00:06:03.387 00:06:03.387 ' 00:06:03.387 19:14:01 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.387 --rc genhtml_branch_coverage=1 00:06:03.387 --rc genhtml_function_coverage=1 00:06:03.387 --rc genhtml_legend=1 00:06:03.387 --rc geninfo_all_blocks=1 00:06:03.387 --rc geninfo_unexecuted_blocks=1 00:06:03.387 00:06:03.387 ' 00:06:03.387 19:14:01 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:03.387 --rc genhtml_branch_coverage=1 00:06:03.387 --rc genhtml_function_coverage=1 00:06:03.387 --rc genhtml_legend=1 00:06:03.387 --rc geninfo_all_blocks=1 00:06:03.387 --rc geninfo_unexecuted_blocks=1 00:06:03.387 00:06:03.387 ' 00:06:03.387 19:14:01 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.387 --rc genhtml_branch_coverage=1 00:06:03.388 --rc genhtml_function_coverage=1 00:06:03.388 --rc genhtml_legend=1 00:06:03.388 --rc geninfo_all_blocks=1 00:06:03.388 --rc geninfo_unexecuted_blocks=1 00:06:03.388 00:06:03.388 ' 00:06:03.388 19:14:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.388 19:14:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:03.388 19:14:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.388 19:14:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.388 ************************************ 00:06:03.388 START TEST thread_poller_perf 00:06:03.388 ************************************ 00:06:03.388 19:14:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.388 [2024-11-26 19:14:01.635963] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:03.388 [2024-11-26 19:14:01.636065] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:06:03.388 [2024-11-26 19:14:01.779546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.646 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:03.646 [2024-11-26 19:14:01.840741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.657 [2024-11-26T19:14:03.097Z] ====================================== 00:06:04.657 [2024-11-26T19:14:03.097Z] busy:2206676340 (cyc) 00:06:04.657 [2024-11-26T19:14:03.097Z] total_run_count: 367000 00:06:04.657 [2024-11-26T19:14:03.097Z] tsc_hz: 2200000000 (cyc) 00:06:04.657 [2024-11-26T19:14:03.097Z] ====================================== 00:06:04.657 [2024-11-26T19:14:03.097Z] poller_cost: 6012 (cyc), 2732 (nsec) 00:06:04.657 00:06:04.657 real 0m1.274s 00:06:04.657 user 0m1.126s 00:06:04.657 sys 0m0.042s 00:06:04.657 19:14:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.657 ************************************ 00:06:04.657 END TEST thread_poller_perf 00:06:04.657 ************************************ 00:06:04.657 19:14:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.657 19:14:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.657 19:14:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:04.657 19:14:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.657 19:14:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.657 ************************************ 00:06:04.657 START TEST thread_poller_perf 00:06:04.657 ************************************ 00:06:04.657 19:14:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.657 [2024-11-26 19:14:02.965977] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:04.657 [2024-11-26 19:14:02.966068] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:06:04.915 [2024-11-26 19:14:03.113365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.915 Running 1000 pollers for 1 seconds with 0 microseconds period. 
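The poller_cost figures printed by poller_perf follow directly from the totals: for the -l 1 run above, 2206676340 busy cycles over 367000 completed runs give roughly 6012 cycles per poller invocation, and at the reported tsc_hz of 2200000000 that is about 2732 ns. The same arithmetic, for checking a run by hand:

  echo $(( 2206676340 / 367000 ))               # ~6012 cycles per poll
  echo $(( 6012 * 1000000000 / 2200000000 ))    # ~2732 nsec at a 2.2 GHz TSC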
00:06:04.915 [2024-11-26 19:14:03.171384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.852 [2024-11-26T19:14:04.292Z] ====================================== 00:06:05.852 [2024-11-26T19:14:04.292Z] busy:2201918060 (cyc) 00:06:05.852 [2024-11-26T19:14:04.292Z] total_run_count: 4626000 00:06:05.852 [2024-11-26T19:14:04.292Z] tsc_hz: 2200000000 (cyc) 00:06:05.852 [2024-11-26T19:14:04.292Z] ====================================== 00:06:05.852 [2024-11-26T19:14:04.292Z] poller_cost: 475 (cyc), 215 (nsec) 00:06:05.852 00:06:05.852 real 0m1.271s 00:06:05.852 user 0m1.118s 00:06:05.852 sys 0m0.046s 00:06:05.852 19:14:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.852 19:14:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.852 ************************************ 00:06:05.852 END TEST thread_poller_perf 00:06:05.852 ************************************ 00:06:05.852 19:14:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.852 00:06:05.852 real 0m2.828s 00:06:05.852 user 0m2.375s 00:06:05.852 sys 0m0.240s 00:06:05.852 19:14:04 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.852 ************************************ 00:06:05.852 END TEST thread 00:06:05.852 ************************************ 00:06:05.852 19:14:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.111 19:14:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:06.111 19:14:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.111 19:14:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.111 19:14:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.111 19:14:04 -- common/autotest_common.sh@10 -- # set +x 00:06:06.111 ************************************ 00:06:06.111 START TEST app_cmdline 00:06:06.111 ************************************ 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.111 * Looking for test storage... 
00:06:06.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.111 19:14:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.111 --rc genhtml_branch_coverage=1 00:06:06.111 --rc genhtml_function_coverage=1 00:06:06.111 --rc genhtml_legend=1 00:06:06.111 --rc geninfo_all_blocks=1 00:06:06.111 --rc geninfo_unexecuted_blocks=1 00:06:06.111 00:06:06.111 ' 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.111 --rc genhtml_branch_coverage=1 00:06:06.111 --rc genhtml_function_coverage=1 00:06:06.111 --rc genhtml_legend=1 00:06:06.111 --rc geninfo_all_blocks=1 00:06:06.111 --rc geninfo_unexecuted_blocks=1 00:06:06.111 
00:06:06.111 ' 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.111 --rc genhtml_branch_coverage=1 00:06:06.111 --rc genhtml_function_coverage=1 00:06:06.111 --rc genhtml_legend=1 00:06:06.111 --rc geninfo_all_blocks=1 00:06:06.111 --rc geninfo_unexecuted_blocks=1 00:06:06.111 00:06:06.111 ' 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.111 --rc genhtml_branch_coverage=1 00:06:06.111 --rc genhtml_function_coverage=1 00:06:06.111 --rc genhtml_legend=1 00:06:06.111 --rc geninfo_all_blocks=1 00:06:06.111 --rc geninfo_unexecuted_blocks=1 00:06:06.111 00:06:06.111 ' 00:06:06.111 19:14:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:06.111 19:14:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59159 00:06:06.111 19:14:04 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:06.111 19:14:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59159 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59159 ']' 00:06:06.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.111 19:14:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.111 [2024-11-26 19:14:04.546582] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
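cmdline.sh starts this spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock; any other method should be rejected with the -32601 "Method not found" error that the env_dpdk_get_mem_stats call further below runs into. A hedged manual equivalent of what the rpc_cmd wrappers do here:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version      # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods       # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
    || echo "rejected: not in --rpcs-allowed"                       # expect -32601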
00:06:06.111 [2024-11-26 19:14:04.546693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59159 ] 00:06:06.370 [2024-11-26 19:14:04.688109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.370 [2024-11-26 19:14:04.741867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.629 [2024-11-26 19:14:04.812413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.629 19:14:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.629 19:14:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:06.629 19:14:05 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:06.888 { 00:06:06.888 "version": "SPDK v25.01-pre git sha1 67afc973b", 00:06:06.888 "fields": { 00:06:06.888 "major": 25, 00:06:06.888 "minor": 1, 00:06:06.888 "patch": 0, 00:06:06.888 "suffix": "-pre", 00:06:06.888 "commit": "67afc973b" 00:06:06.888 } 00:06:06.888 } 00:06:06.888 19:14:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:06.888 19:14:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:06.888 19:14:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:06.888 19:14:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:06.888 19:14:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:06.888 19:14:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:06.888 19:14:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.888 19:14:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.888 19:14:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:06.888 19:14:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.147 19:14:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:07.147 19:14:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:07.147 19:14:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:07.147 19:14:05 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.407 request: 00:06:07.407 { 00:06:07.407 "method": "env_dpdk_get_mem_stats", 00:06:07.407 "req_id": 1 00:06:07.407 } 00:06:07.407 Got JSON-RPC error response 00:06:07.407 response: 00:06:07.407 { 00:06:07.407 "code": -32601, 00:06:07.407 "message": "Method not found" 00:06:07.407 } 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.407 19:14:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59159 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59159 ']' 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59159 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59159 00:06:07.407 killing process with pid 59159 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59159' 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 59159 00:06:07.407 19:14:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 59159 00:06:07.666 ************************************ 00:06:07.666 END TEST app_cmdline 00:06:07.666 ************************************ 00:06:07.666 00:06:07.666 real 0m1.751s 00:06:07.666 user 0m2.143s 00:06:07.666 sys 0m0.473s 00:06:07.666 19:14:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.666 19:14:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.924 19:14:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.924 19:14:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.924 19:14:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.924 19:14:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.924 ************************************ 00:06:07.924 START TEST version 00:06:07.924 ************************************ 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.924 * Looking for test storage... 
00:06:07.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.924 19:14:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.924 19:14:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.924 19:14:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.924 19:14:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.924 19:14:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.924 19:14:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.924 19:14:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.924 19:14:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.924 19:14:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.924 19:14:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.924 19:14:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.924 19:14:06 version -- scripts/common.sh@344 -- # case "$op" in 00:06:07.924 19:14:06 version -- scripts/common.sh@345 -- # : 1 00:06:07.924 19:14:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.924 19:14:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.924 19:14:06 version -- scripts/common.sh@365 -- # decimal 1 00:06:07.924 19:14:06 version -- scripts/common.sh@353 -- # local d=1 00:06:07.924 19:14:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.924 19:14:06 version -- scripts/common.sh@355 -- # echo 1 00:06:07.924 19:14:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.924 19:14:06 version -- scripts/common.sh@366 -- # decimal 2 00:06:07.924 19:14:06 version -- scripts/common.sh@353 -- # local d=2 00:06:07.924 19:14:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.924 19:14:06 version -- scripts/common.sh@355 -- # echo 2 00:06:07.924 19:14:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.924 19:14:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.924 19:14:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.924 19:14:06 version -- scripts/common.sh@368 -- # return 0 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.924 --rc genhtml_branch_coverage=1 00:06:07.924 --rc genhtml_function_coverage=1 00:06:07.924 --rc genhtml_legend=1 00:06:07.924 --rc geninfo_all_blocks=1 00:06:07.924 --rc geninfo_unexecuted_blocks=1 00:06:07.924 00:06:07.924 ' 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.924 --rc genhtml_branch_coverage=1 00:06:07.924 --rc genhtml_function_coverage=1 00:06:07.924 --rc genhtml_legend=1 00:06:07.924 --rc geninfo_all_blocks=1 00:06:07.924 --rc geninfo_unexecuted_blocks=1 00:06:07.924 00:06:07.924 ' 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.924 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:07.924 --rc genhtml_branch_coverage=1 00:06:07.924 --rc genhtml_function_coverage=1 00:06:07.924 --rc genhtml_legend=1 00:06:07.924 --rc geninfo_all_blocks=1 00:06:07.924 --rc geninfo_unexecuted_blocks=1 00:06:07.924 00:06:07.924 ' 00:06:07.924 19:14:06 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.924 --rc genhtml_branch_coverage=1 00:06:07.924 --rc genhtml_function_coverage=1 00:06:07.924 --rc genhtml_legend=1 00:06:07.924 --rc geninfo_all_blocks=1 00:06:07.924 --rc geninfo_unexecuted_blocks=1 00:06:07.924 00:06:07.924 ' 00:06:07.924 19:14:06 version -- app/version.sh@17 -- # get_header_version major 00:06:07.924 19:14:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.924 19:14:06 version -- app/version.sh@14 -- # cut -f2 00:06:07.924 19:14:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.924 19:14:06 version -- app/version.sh@17 -- # major=25 00:06:07.924 19:14:06 version -- app/version.sh@18 -- # get_header_version minor 00:06:07.924 19:14:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.925 19:14:06 version -- app/version.sh@14 -- # cut -f2 00:06:07.925 19:14:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.925 19:14:06 version -- app/version.sh@18 -- # minor=1 00:06:07.925 19:14:06 version -- app/version.sh@19 -- # get_header_version patch 00:06:07.925 19:14:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.925 19:14:06 version -- app/version.sh@14 -- # cut -f2 00:06:07.925 19:14:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.925 19:14:06 version -- app/version.sh@19 -- # patch=0 00:06:07.925 19:14:06 version -- app/version.sh@20 -- # get_header_version suffix 00:06:07.925 19:14:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.925 19:14:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.925 19:14:06 version -- app/version.sh@14 -- # cut -f2 00:06:07.925 19:14:06 version -- app/version.sh@20 -- # suffix=-pre 00:06:07.925 19:14:06 version -- app/version.sh@22 -- # version=25.1 00:06:07.925 19:14:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:07.925 19:14:06 version -- app/version.sh@28 -- # version=25.1rc0 00:06:07.925 19:14:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:07.925 19:14:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.185 19:14:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:08.185 19:14:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:08.185 00:06:08.185 real 0m0.255s 00:06:08.185 user 0m0.163s 00:06:08.185 sys 0m0.131s 00:06:08.185 ************************************ 00:06:08.185 END TEST version 00:06:08.185 ************************************ 00:06:08.185 19:14:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.185 19:14:06 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.185 19:14:06 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:08.185 19:14:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:08.185 19:14:06 -- spdk/autotest.sh@194 -- # uname -s 00:06:08.185 19:14:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:08.185 19:14:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.185 19:14:06 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:08.185 19:14:06 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:08.185 19:14:06 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:08.185 19:14:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.185 19:14:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.185 19:14:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.185 ************************************ 00:06:08.185 START TEST spdk_dd 00:06:08.185 ************************************ 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:08.185 * Looking for test storage... 00:06:08.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.185 --rc genhtml_branch_coverage=1 00:06:08.185 --rc genhtml_function_coverage=1 00:06:08.185 --rc genhtml_legend=1 00:06:08.185 --rc geninfo_all_blocks=1 00:06:08.185 --rc geninfo_unexecuted_blocks=1 00:06:08.185 00:06:08.185 ' 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.185 --rc genhtml_branch_coverage=1 00:06:08.185 --rc genhtml_function_coverage=1 00:06:08.185 --rc genhtml_legend=1 00:06:08.185 --rc geninfo_all_blocks=1 00:06:08.185 --rc geninfo_unexecuted_blocks=1 00:06:08.185 00:06:08.185 ' 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.185 --rc genhtml_branch_coverage=1 00:06:08.185 --rc genhtml_function_coverage=1 00:06:08.185 --rc genhtml_legend=1 00:06:08.185 --rc geninfo_all_blocks=1 00:06:08.185 --rc geninfo_unexecuted_blocks=1 00:06:08.185 00:06:08.185 ' 00:06:08.185 19:14:06 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.185 --rc genhtml_branch_coverage=1 00:06:08.185 --rc genhtml_function_coverage=1 00:06:08.185 --rc genhtml_legend=1 00:06:08.185 --rc geninfo_all_blocks=1 00:06:08.185 --rc geninfo_unexecuted_blocks=1 00:06:08.185 00:06:08.185 ' 00:06:08.185 19:14:06 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.185 19:14:06 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.185 19:14:06 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.185 19:14:06 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.185 19:14:06 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.185 19:14:06 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:08.185 19:14:06 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.185 19:14:06 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.763 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.763 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.763 19:14:06 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:08.763 19:14:06 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:08.763 19:14:06 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:08.763 19:14:07 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:08.763 19:14:07 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:08.763 19:14:07 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:08.763 19:14:07 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:08.763 19:14:07 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:08.763 19:14:07 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:08.763 19:14:07 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:08.763 19:14:07 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:08.764 19:14:07 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:08.764 19:14:07 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
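A condensed sketch of the check being traced here, where dd/common.sh's check_liburing walks the dynamic-section entries of the spdk_dd binary; the scan of the remaining SPDK and DPDK libraries continues in the trace below. Variable names and the binary path are approximations for illustration, not the literal harness source:

# Flag whether spdk_dd was linked against liburing by scanning the NEEDED
# entries of its ELF dynamic section.
liburing_in_use=0
while read -r _ lib _; do
    # Each NEEDED line names one shared-library dependency of the binary.
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(objdump -p build/bin/spdk_dd | grep NEEDED)
(( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'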
00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.764 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.765 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:08.766 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.767 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:08.768 * spdk_dd linked to liburing 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:08.768 19:14:07 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:08.768 19:14:07 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:08.769 19:14:07 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:08.770 19:14:07 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:08.770 19:14:07 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:08.770 19:14:07 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:08.770 19:14:07 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:08.770 19:14:07 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:08.770 19:14:07 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:08.770 19:14:07 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:08.770 19:14:07 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:08.770 19:14:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:08.770 19:14:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.770 19:14:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:08.770 ************************************ 00:06:08.770 START TEST spdk_dd_basic_rw 00:06:08.770 ************************************ 00:06:08.770 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:08.770 * Looking for test storage... 00:06:08.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:08.770 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.770 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.770 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.030 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.031 --rc genhtml_branch_coverage=1 00:06:09.031 --rc genhtml_function_coverage=1 00:06:09.031 --rc genhtml_legend=1 00:06:09.031 --rc geninfo_all_blocks=1 00:06:09.031 --rc geninfo_unexecuted_blocks=1 00:06:09.031 00:06:09.031 ' 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.031 --rc genhtml_branch_coverage=1 00:06:09.031 --rc genhtml_function_coverage=1 00:06:09.031 --rc genhtml_legend=1 00:06:09.031 --rc geninfo_all_blocks=1 00:06:09.031 --rc geninfo_unexecuted_blocks=1 00:06:09.031 00:06:09.031 ' 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.031 --rc genhtml_branch_coverage=1 00:06:09.031 --rc genhtml_function_coverage=1 00:06:09.031 --rc genhtml_legend=1 00:06:09.031 --rc geninfo_all_blocks=1 00:06:09.031 --rc geninfo_unexecuted_blocks=1 00:06:09.031 00:06:09.031 ' 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.031 --rc genhtml_branch_coverage=1 00:06:09.031 --rc genhtml_function_coverage=1 00:06:09.031 --rc genhtml_legend=1 00:06:09.031 --rc geninfo_all_blocks=1 00:06:09.031 --rc geninfo_unexecuted_blocks=1 00:06:09.031 00:06:09.031 ' 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
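Before the native block size is probed in the next step, note how the attach-controller array above is consumed: the harness's gen_conf helper renders it as the --json payload handed to spdk_dd (the same JSON appears verbatim later in this log, fed over /dev/fd rather than a file). A sketch of that payload written to an assumed path, for illustration only:

# Assumed file name; the harness actually passes this JSON over a file
# descriptor (--json /dev/fd/61) rather than a file on disk.
cat > /tmp/dd_conf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON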
00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:09.031 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:09.293 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:09.293 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported [... remainder of the duplicated identify dump trimmed: xtrace expands the same controller output shown above a second time while matching the data-size pattern ...] Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.294 ************************************ 00:06:09.294 START TEST dd_bs_lt_native_bs 00:06:09.294 ************************************ 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.294 19:14:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.294 { 00:06:09.294 "subsystems": [ 00:06:09.294 { 00:06:09.294 "subsystem": "bdev", 00:06:09.294 "config": [ 00:06:09.294 { 00:06:09.294 "params": { 00:06:09.294 "trtype": "pcie", 00:06:09.294 "traddr": "0000:00:10.0", 00:06:09.294 "name": "Nvme0" 00:06:09.294 }, 00:06:09.294 "method": "bdev_nvme_attach_controller" 00:06:09.294 }, 00:06:09.294 { 00:06:09.294 "method": "bdev_wait_for_examine" 00:06:09.294 } 00:06:09.294 ] 00:06:09.294 } 00:06:09.294 ] 00:06:09.294 } 00:06:09.294 [2024-11-26 19:14:07.549694] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:09.294 [2024-11-26 19:14:07.549800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59498 ] 00:06:09.294 [2024-11-26 19:14:07.703296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.593 [2024-11-26 19:14:07.764316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.593 [2024-11-26 19:14:07.821716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.593 [2024-11-26 19:14:07.933045] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:09.593 [2024-11-26 19:14:07.933122] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.878 [2024-11-26 19:14:08.060186] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.878 00:06:09.878 real 0m0.628s 00:06:09.878 user 0m0.424s 00:06:09.878 sys 0m0.158s 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.878 19:14:08 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:09.878 ************************************ 00:06:09.878 END TEST dd_bs_lt_native_bs 00:06:09.878 ************************************ 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.878 ************************************ 00:06:09.878 START TEST dd_rw 00:06:09.878 ************************************ 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:09.878 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.446 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:10.446 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:10.446 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.446 19:14:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.446 { 00:06:10.446 "subsystems": [ 00:06:10.446 { 00:06:10.446 "subsystem": "bdev", 00:06:10.446 "config": [ 00:06:10.446 { 00:06:10.446 "params": { 00:06:10.446 "trtype": "pcie", 00:06:10.446 "traddr": "0000:00:10.0", 00:06:10.446 "name": "Nvme0" 00:06:10.446 }, 00:06:10.446 "method": "bdev_nvme_attach_controller" 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "bdev_wait_for_examine" 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 } 
00:06:10.446 ] 00:06:10.446 } 00:06:10.446 [2024-11-26 19:14:08.827670] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:10.446 [2024-11-26 19:14:08.827789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59534 ] 00:06:10.705 [2024-11-26 19:14:08.975230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.705 [2024-11-26 19:14:09.024857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.705 [2024-11-26 19:14:09.077559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.969  [2024-11-26T19:14:09.409Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:10.969 00:06:10.969 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:10.969 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:10.969 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.969 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.228 [2024-11-26 19:14:09.431172] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:11.228 [2024-11-26 19:14:09.431296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59548 ] 00:06:11.228 { 00:06:11.228 "subsystems": [ 00:06:11.228 { 00:06:11.228 "subsystem": "bdev", 00:06:11.228 "config": [ 00:06:11.228 { 00:06:11.228 "params": { 00:06:11.228 "trtype": "pcie", 00:06:11.228 "traddr": "0000:00:10.0", 00:06:11.228 "name": "Nvme0" 00:06:11.228 }, 00:06:11.228 "method": "bdev_nvme_attach_controller" 00:06:11.228 }, 00:06:11.228 { 00:06:11.228 "method": "bdev_wait_for_examine" 00:06:11.228 } 00:06:11.228 ] 00:06:11.228 } 00:06:11.228 ] 00:06:11.228 } 00:06:11.228 [2024-11-26 19:14:09.577382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.228 [2024-11-26 19:14:09.626975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.487 [2024-11-26 19:14:09.684936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.487  [2024-11-26T19:14:10.186Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:11.746 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw 
-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.746 19:14:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.746 [2024-11-26 19:14:10.051325] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:11.746 [2024-11-26 19:14:10.051428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59569 ] 00:06:11.746 { 00:06:11.746 "subsystems": [ 00:06:11.746 { 00:06:11.746 "subsystem": "bdev", 00:06:11.746 "config": [ 00:06:11.746 { 00:06:11.746 "params": { 00:06:11.746 "trtype": "pcie", 00:06:11.746 "traddr": "0000:00:10.0", 00:06:11.746 "name": "Nvme0" 00:06:11.746 }, 00:06:11.746 "method": "bdev_nvme_attach_controller" 00:06:11.746 }, 00:06:11.746 { 00:06:11.746 "method": "bdev_wait_for_examine" 00:06:11.746 } 00:06:11.746 ] 00:06:11.746 } 00:06:11.746 ] 00:06:11.746 } 00:06:12.004 [2024-11-26 19:14:10.198394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.004 [2024-11-26 19:14:10.251085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.004 [2024-11-26 19:14:10.304130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.004  [2024-11-26T19:14:10.703Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:12.263 00:06:12.263 19:14:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.263 19:14:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:12.263 19:14:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:12.263 19:14:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:12.263 19:14:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:12.263 19:14:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.263 19:14:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.829 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:12.829 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.829 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.829 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.829 [2024-11-26 19:14:11.241725] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
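The dd_bs_lt_native_bs stanza above keys off the identify dump: dd/common.sh matches the current LBA format line ("LBA Format #04: Data Size: 4096") to set native_bs=4096, and the NOT wrapper then expects spdk_dd to refuse --bs=2048, which it does with the "--bs value cannot be less than input (1) neither output (4096) native block size" error. The trace shows the raw exit status 234 being folded to 106 and then to 1, which is what lets the negative test count as a pass. A minimal sketch of the same block-size extraction follows, assuming the identify output has been captured to a hypothetical identify.txt; it is not the harness's dd/common.sh code.

  # Sketch: derive the native block size from a captured identify dump
  # (identify.txt is an assumed capture of the output shown above).
  id_dump=$(cat identify.txt)
  re='LBA Format #04: Data Size: *([0-9]+)'
  if [[ $id_dump =~ $re ]]; then
      native_bs=${BASH_REMATCH[1]}     # 4096 for this controller
      echo "native block size: ${native_bs}"
  fi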
00:06:12.830 [2024-11-26 19:14:11.241830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:06:12.830 { 00:06:12.830 "subsystems": [ 00:06:12.830 { 00:06:12.830 "subsystem": "bdev", 00:06:12.830 "config": [ 00:06:12.830 { 00:06:12.830 "params": { 00:06:12.830 "trtype": "pcie", 00:06:12.830 "traddr": "0000:00:10.0", 00:06:12.830 "name": "Nvme0" 00:06:12.830 }, 00:06:12.830 "method": "bdev_nvme_attach_controller" 00:06:12.830 }, 00:06:12.830 { 00:06:12.830 "method": "bdev_wait_for_examine" 00:06:12.830 } 00:06:12.830 ] 00:06:12.830 } 00:06:12.830 ] 00:06:12.830 } 00:06:13.088 [2024-11-26 19:14:11.389447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.089 [2024-11-26 19:14:11.438442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.089 [2024-11-26 19:14:11.491686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.348  [2024-11-26T19:14:11.788Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:13.348 00:06:13.607 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:13.607 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:13.607 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.607 19:14:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.607 { 00:06:13.607 "subsystems": [ 00:06:13.607 { 00:06:13.607 "subsystem": "bdev", 00:06:13.607 "config": [ 00:06:13.607 { 00:06:13.607 "params": { 00:06:13.607 "trtype": "pcie", 00:06:13.608 "traddr": "0000:00:10.0", 00:06:13.608 "name": "Nvme0" 00:06:13.608 }, 00:06:13.608 "method": "bdev_nvme_attach_controller" 00:06:13.608 }, 00:06:13.608 { 00:06:13.608 "method": "bdev_wait_for_examine" 00:06:13.608 } 00:06:13.608 ] 00:06:13.608 } 00:06:13.608 ] 00:06:13.608 } 00:06:13.608 [2024-11-26 19:14:11.845779] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
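Each dd_rw iteration above follows the same shape: gen_bytes fills dd.dump0, spdk_dd writes it to the Nvme0n1 bdev at the chosen --bs/--qd, a second spdk_dd reads the same number of blocks back into dd.dump1, diff -q confirms the round trip, and clear_nvme overwrites the first MiB of the bdev with zeroes before the next combination. The condensed sketch below mirrors that cycle; the paths come from the log, while bdev.json and the /dev/urandom fill are stand-ins (the real harness generates data with gen_bytes and streams the config over a file descriptor).

  # Condensed, illustrative version of one dd_rw iteration (not basic_rw.sh itself)
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  CONF=bdev.json            # assumed file holding the bdev_nvme_attach_controller config shown above
  bs=4096 qd=1 count=15     # first combination in the log: 15 * 4096 = 61440 bytes

  head -c $((bs * count)) /dev/urandom > "$DUMP0"                        # stand-in for gen_bytes
  "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"
  "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"
  diff -q "$DUMP0" "$DUMP1"                                              # round trip must match
  "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"   # clear_nvme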
00:06:13.608 [2024-11-26 19:14:11.845879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:06:13.608 [2024-11-26 19:14:11.994651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.867 [2024-11-26 19:14:12.047199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.867 [2024-11-26 19:14:12.102272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.867  [2024-11-26T19:14:12.566Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:14.126 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.126 19:14:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.126 { 00:06:14.126 "subsystems": [ 00:06:14.126 { 00:06:14.126 "subsystem": "bdev", 00:06:14.126 "config": [ 00:06:14.126 { 00:06:14.126 "params": { 00:06:14.126 "trtype": "pcie", 00:06:14.126 "traddr": "0000:00:10.0", 00:06:14.126 "name": "Nvme0" 00:06:14.126 }, 00:06:14.126 "method": "bdev_nvme_attach_controller" 00:06:14.126 }, 00:06:14.126 { 00:06:14.126 "method": "bdev_wait_for_examine" 00:06:14.126 } 00:06:14.126 ] 00:06:14.126 } 00:06:14.126 ] 00:06:14.126 } 00:06:14.126 [2024-11-26 19:14:12.461889] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:14.126 [2024-11-26 19:14:12.462023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59617 ] 00:06:14.385 [2024-11-26 19:14:12.609381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.385 [2024-11-26 19:14:12.662478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.385 [2024-11-26 19:14:12.716359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.644  [2024-11-26T19:14:13.084Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:14.644 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:14.644 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.211 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:15.211 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:15.211 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.211 19:14:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.211 [2024-11-26 19:14:13.592852] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
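The combinations being stepped through come from two small arrays set up at the top of dd_rw: the block sizes are the native 4096 shifted left by 0, 1 and 2 (4096, 8192, 16384) and the queue depths are 1 and 64, while the block count drops as bs grows so every transfer stays around the same size (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152). The sketch below reproduces that matrix with the same shift arithmetic the trace shows; the per-bs counts are simply the values observed in this run.

  # Sketch of the bs/qd matrix driven in the trace above (variable names mirror basic_rw.sh)
  native_bs=4096
  qds=(1 64)
  bss=()
  for s in {0..2}; do
      bss+=($((native_bs << s)))       # 4096 8192 16384
  done
  for bs in "${bss[@]}"; do
      case $bs in
          4096)  count=15 ;;
          8192)  count=7 ;;
          16384) count=3 ;;
      esac
      for qd in "${qds[@]}"; do
          echo "bs=$bs qd=$qd count=$count size=$((bs * count))"
      done
  done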
00:06:15.211 [2024-11-26 19:14:13.592960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:06:15.211 { 00:06:15.211 "subsystems": [ 00:06:15.211 { 00:06:15.211 "subsystem": "bdev", 00:06:15.211 "config": [ 00:06:15.211 { 00:06:15.211 "params": { 00:06:15.211 "trtype": "pcie", 00:06:15.212 "traddr": "0000:00:10.0", 00:06:15.212 "name": "Nvme0" 00:06:15.212 }, 00:06:15.212 "method": "bdev_nvme_attach_controller" 00:06:15.212 }, 00:06:15.212 { 00:06:15.212 "method": "bdev_wait_for_examine" 00:06:15.212 } 00:06:15.212 ] 00:06:15.212 } 00:06:15.212 ] 00:06:15.212 } 00:06:15.471 [2024-11-26 19:14:13.731555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.471 [2024-11-26 19:14:13.786772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.471 [2024-11-26 19:14:13.844408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.730  [2024-11-26T19:14:14.170Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:15.730 00:06:15.730 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:15.730 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:15.730 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.730 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.989 { 00:06:15.989 "subsystems": [ 00:06:15.989 { 00:06:15.989 "subsystem": "bdev", 00:06:15.989 "config": [ 00:06:15.989 { 00:06:15.989 "params": { 00:06:15.989 "trtype": "pcie", 00:06:15.989 "traddr": "0000:00:10.0", 00:06:15.989 "name": "Nvme0" 00:06:15.989 }, 00:06:15.989 "method": "bdev_nvme_attach_controller" 00:06:15.989 }, 00:06:15.989 { 00:06:15.989 "method": "bdev_wait_for_examine" 00:06:15.989 } 00:06:15.989 ] 00:06:15.989 } 00:06:15.989 ] 00:06:15.989 } 00:06:15.989 [2024-11-26 19:14:14.196817] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:15.989 [2024-11-26 19:14:14.196949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59655 ] 00:06:15.989 [2024-11-26 19:14:14.342684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.989 [2024-11-26 19:14:14.396925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.247 [2024-11-26 19:14:14.455790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.247  [2024-11-26T19:14:14.946Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:16.506 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.506 19:14:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.506 [2024-11-26 19:14:14.809601] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:16.506 [2024-11-26 19:14:14.810532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:06:16.506 { 00:06:16.506 "subsystems": [ 00:06:16.506 { 00:06:16.506 "subsystem": "bdev", 00:06:16.506 "config": [ 00:06:16.506 { 00:06:16.506 "params": { 00:06:16.506 "trtype": "pcie", 00:06:16.506 "traddr": "0000:00:10.0", 00:06:16.506 "name": "Nvme0" 00:06:16.506 }, 00:06:16.506 "method": "bdev_nvme_attach_controller" 00:06:16.506 }, 00:06:16.506 { 00:06:16.506 "method": "bdev_wait_for_examine" 00:06:16.506 } 00:06:16.506 ] 00:06:16.506 } 00:06:16.506 ] 00:06:16.506 } 00:06:16.765 [2024-11-26 19:14:14.957402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.765 [2024-11-26 19:14:15.003879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.765 [2024-11-26 19:14:15.059193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.765  [2024-11-26T19:14:15.464Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:17.024 00:06:17.024 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:17.024 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:17.024 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:17.024 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:17.024 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:17.024 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:17.024 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.592 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:17.592 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:17.592 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.592 19:14:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.592 [2024-11-26 19:14:15.932009] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:17.592 [2024-11-26 19:14:15.932129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:06:17.592 { 00:06:17.592 "subsystems": [ 00:06:17.592 { 00:06:17.592 "subsystem": "bdev", 00:06:17.592 "config": [ 00:06:17.592 { 00:06:17.592 "params": { 00:06:17.592 "trtype": "pcie", 00:06:17.592 "traddr": "0000:00:10.0", 00:06:17.592 "name": "Nvme0" 00:06:17.592 }, 00:06:17.592 "method": "bdev_nvme_attach_controller" 00:06:17.592 }, 00:06:17.592 { 00:06:17.592 "method": "bdev_wait_for_examine" 00:06:17.592 } 00:06:17.592 ] 00:06:17.592 } 00:06:17.592 ] 00:06:17.592 } 00:06:17.851 [2024-11-26 19:14:16.072493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.851 [2024-11-26 19:14:16.127105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.851 [2024-11-26 19:14:16.182556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.109  [2024-11-26T19:14:16.549Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:18.110 00:06:18.110 19:14:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:18.110 19:14:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:18.110 19:14:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.110 19:14:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.110 [2024-11-26 19:14:16.513119] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:18.110 [2024-11-26 19:14:16.513245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59703 ] 00:06:18.110 { 00:06:18.110 "subsystems": [ 00:06:18.110 { 00:06:18.110 "subsystem": "bdev", 00:06:18.110 "config": [ 00:06:18.110 { 00:06:18.110 "params": { 00:06:18.110 "trtype": "pcie", 00:06:18.110 "traddr": "0000:00:10.0", 00:06:18.110 "name": "Nvme0" 00:06:18.110 }, 00:06:18.110 "method": "bdev_nvme_attach_controller" 00:06:18.110 }, 00:06:18.110 { 00:06:18.110 "method": "bdev_wait_for_examine" 00:06:18.110 } 00:06:18.110 ] 00:06:18.110 } 00:06:18.110 ] 00:06:18.110 } 00:06:18.369 [2024-11-26 19:14:16.651795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.369 [2024-11-26 19:14:16.697649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.369 [2024-11-26 19:14:16.751724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.627  [2024-11-26T19:14:17.068Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:18.628 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.628 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.887 { 00:06:18.887 "subsystems": [ 00:06:18.887 { 00:06:18.887 "subsystem": "bdev", 00:06:18.887 "config": [ 00:06:18.887 { 00:06:18.887 "params": { 00:06:18.887 "trtype": "pcie", 00:06:18.887 "traddr": "0000:00:10.0", 00:06:18.887 "name": "Nvme0" 00:06:18.887 }, 00:06:18.887 "method": "bdev_nvme_attach_controller" 00:06:18.887 }, 00:06:18.887 { 00:06:18.887 "method": "bdev_wait_for_examine" 00:06:18.887 } 00:06:18.887 ] 00:06:18.887 } 00:06:18.887 ] 00:06:18.887 } 00:06:18.887 [2024-11-26 19:14:17.104871] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
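Every spdk_dd call in this run is pointed at the same minimal bdev configuration, printed repeatedly by gen_conf in the trace: a bdev_nvme_attach_controller request for the PCIe controller at 0000:00:10.0, exposed as Nvme0 (hence the Nvme0n1 namespace bdev), followed by bdev_wait_for_examine. The harness streams it through a file descriptor (--json /dev/fd/62). The sketch below reproduces the same idea with process substitution; the JSON is copied from the log, while the particular read command at the end is only an illustrative way to exercise the config.

  # Illustrative: feed the bdev config from the trace to spdk_dd via process substitution
  gen_conf() {
      printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" }, "method": "bdev_nvme_attach_controller" }, { "method": "bdev_wait_for_examine" } ] } ] }'
  }
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --count=1 --json <(gen_conf)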
00:06:18.887 [2024-11-26 19:14:17.105458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:06:18.887 [2024-11-26 19:14:17.252752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.887 [2024-11-26 19:14:17.300756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.146 [2024-11-26 19:14:17.356577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.146  [2024-11-26T19:14:17.845Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:19.405 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:19.405 19:14:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.974 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:19.974 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:19.974 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.974 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.974 [2024-11-26 19:14:18.165484] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:19.974 [2024-11-26 19:14:18.165611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:06:19.974 { 00:06:19.974 "subsystems": [ 00:06:19.974 { 00:06:19.974 "subsystem": "bdev", 00:06:19.974 "config": [ 00:06:19.974 { 00:06:19.974 "params": { 00:06:19.974 "trtype": "pcie", 00:06:19.974 "traddr": "0000:00:10.0", 00:06:19.974 "name": "Nvme0" 00:06:19.974 }, 00:06:19.974 "method": "bdev_nvme_attach_controller" 00:06:19.974 }, 00:06:19.974 { 00:06:19.974 "method": "bdev_wait_for_examine" 00:06:19.974 } 00:06:19.974 ] 00:06:19.974 } 00:06:19.974 ] 00:06:19.974 } 00:06:19.974 [2024-11-26 19:14:18.312904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.974 [2024-11-26 19:14:18.356610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.974 [2024-11-26 19:14:18.409499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.234  [2024-11-26T19:14:18.934Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:20.494 00:06:20.494 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:20.494 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:20.494 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.494 19:14:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.494 [2024-11-26 19:14:18.737490] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:20.494 [2024-11-26 19:14:18.737623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:06:20.494 { 00:06:20.494 "subsystems": [ 00:06:20.494 { 00:06:20.494 "subsystem": "bdev", 00:06:20.494 "config": [ 00:06:20.494 { 00:06:20.494 "params": { 00:06:20.494 "trtype": "pcie", 00:06:20.494 "traddr": "0000:00:10.0", 00:06:20.494 "name": "Nvme0" 00:06:20.494 }, 00:06:20.494 "method": "bdev_nvme_attach_controller" 00:06:20.494 }, 00:06:20.494 { 00:06:20.494 "method": "bdev_wait_for_examine" 00:06:20.494 } 00:06:20.494 ] 00:06:20.494 } 00:06:20.494 ] 00:06:20.494 } 00:06:20.494 [2024-11-26 19:14:18.875232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.494 [2024-11-26 19:14:18.923130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.753 [2024-11-26 19:14:18.978583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.753  [2024-11-26T19:14:19.451Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:21.011 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.011 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.011 { 00:06:21.011 "subsystems": [ 00:06:21.011 { 00:06:21.011 "subsystem": "bdev", 00:06:21.011 "config": [ 00:06:21.011 { 00:06:21.011 "params": { 00:06:21.011 "trtype": "pcie", 00:06:21.011 "traddr": "0000:00:10.0", 00:06:21.011 "name": "Nvme0" 00:06:21.011 }, 00:06:21.012 "method": "bdev_nvme_attach_controller" 00:06:21.012 }, 00:06:21.012 { 00:06:21.012 "method": "bdev_wait_for_examine" 00:06:21.012 } 00:06:21.012 ] 00:06:21.012 } 00:06:21.012 ] 00:06:21.012 } 00:06:21.012 [2024-11-26 19:14:19.337936] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:21.012 [2024-11-26 19:14:19.338033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59772 ] 00:06:21.269 [2024-11-26 19:14:19.485115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.269 [2024-11-26 19:14:19.531179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.269 [2024-11-26 19:14:19.584723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.269  [2024-11-26T19:14:19.968Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:21.528 00:06:21.528 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:21.528 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:21.528 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:21.528 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:21.528 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:21.528 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:21.528 19:14:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.099 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:22.099 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:22.099 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.099 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.099 [2024-11-26 19:14:20.397860] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:22.099 [2024-11-26 19:14:20.398405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59791 ] 00:06:22.099 { 00:06:22.099 "subsystems": [ 00:06:22.099 { 00:06:22.099 "subsystem": "bdev", 00:06:22.099 "config": [ 00:06:22.099 { 00:06:22.099 "params": { 00:06:22.099 "trtype": "pcie", 00:06:22.099 "traddr": "0000:00:10.0", 00:06:22.099 "name": "Nvme0" 00:06:22.099 }, 00:06:22.099 "method": "bdev_nvme_attach_controller" 00:06:22.100 }, 00:06:22.100 { 00:06:22.100 "method": "bdev_wait_for_examine" 00:06:22.100 } 00:06:22.100 ] 00:06:22.100 } 00:06:22.100 ] 00:06:22.100 } 00:06:22.360 [2024-11-26 19:14:20.547278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.360 [2024-11-26 19:14:20.595277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.360 [2024-11-26 19:14:20.649755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.360  [2024-11-26T19:14:21.059Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:22.619 00:06:22.619 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:22.619 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:22.619 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.619 19:14:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.619 [2024-11-26 19:14:20.996443] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:22.619 [2024-11-26 19:14:20.997108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59799 ] 00:06:22.619 { 00:06:22.619 "subsystems": [ 00:06:22.619 { 00:06:22.619 "subsystem": "bdev", 00:06:22.619 "config": [ 00:06:22.619 { 00:06:22.619 "params": { 00:06:22.619 "trtype": "pcie", 00:06:22.619 "traddr": "0000:00:10.0", 00:06:22.619 "name": "Nvme0" 00:06:22.619 }, 00:06:22.619 "method": "bdev_nvme_attach_controller" 00:06:22.619 }, 00:06:22.619 { 00:06:22.619 "method": "bdev_wait_for_examine" 00:06:22.619 } 00:06:22.619 ] 00:06:22.619 } 00:06:22.619 ] 00:06:22.619 } 00:06:22.878 [2024-11-26 19:14:21.139715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.878 [2024-11-26 19:14:21.191711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.878 [2024-11-26 19:14:21.249899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.138  [2024-11-26T19:14:21.578Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:23.138 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.138 19:14:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.397 { 00:06:23.397 "subsystems": [ 00:06:23.397 { 00:06:23.397 "subsystem": "bdev", 00:06:23.397 "config": [ 00:06:23.397 { 00:06:23.397 "params": { 00:06:23.397 "trtype": "pcie", 00:06:23.397 "traddr": "0000:00:10.0", 00:06:23.397 "name": "Nvme0" 00:06:23.397 }, 00:06:23.397 "method": "bdev_nvme_attach_controller" 00:06:23.397 }, 00:06:23.397 { 00:06:23.397 "method": "bdev_wait_for_examine" 00:06:23.397 } 00:06:23.397 ] 00:06:23.397 } 00:06:23.397 ] 00:06:23.397 } 00:06:23.397 [2024-11-26 19:14:21.609674] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:23.397 [2024-11-26 19:14:21.609798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:06:23.397 [2024-11-26 19:14:21.761675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.397 [2024-11-26 19:14:21.809777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.656 [2024-11-26 19:14:21.865863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.656  [2024-11-26T19:14:22.355Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:23.915 00:06:23.915 00:06:23.915 real 0m13.987s 00:06:23.915 user 0m10.115s 00:06:23.915 sys 0m5.420s 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.915 ************************************ 00:06:23.915 END TEST dd_rw 00:06:23.915 ************************************ 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.915 ************************************ 00:06:23.915 START TEST dd_rw_offset 00:06:23.915 ************************************ 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:23.915 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:23.916 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=95kut7jzpg6xqargdbn5bqt7likdldd3tmx1c5o2glr3oqi30ajnsj8w1170l6tlupte92eh13qlk3f38byofwc8doi7ccz1wc21inshl9ruujsskfcxibgysnfwxkrfmguaymzbrtuxd0ofvuphwwqgy5bf5sw7kahj4q2ncrh2m4ikf2nydf078lv16wegldeu715mf4426n9hze7ld9pfgrgbkwr4ramnks0v36puil4g3zwp12w6rr4cbtn2zdgkw0e4kkwkme8demvcto45iabormymgxreokszzmko07h13plj5j9g3ywubqsd08jp4hn7a6wpq34lmmc16urc0059zw927sy4afyr79tlktnv0hsnyjaer82g00cupqyvrifv2wxgdidzmrvl3fgmoj9ipvc43qntd7xpvduopinij5jdml0tmst8h23znmhooyn687p6mgripexz2cbejfumikml578p3eryaxh5o2m4mzgaofnmd4xosie26hxz66gt23yn3dvss5oslscxrjbe57l90y6kuxhkwjmes1hz0i486s8mkv47jh17q8zrj4v2oka0d0r9opt4ayscn7vy9gbmvszk3c1ofthggev6dpkbzslh5nocc3zhycok1ol0xwtus3gtsn3sa4vvl15x78uxkyhbh81hexgijact23zmltroxbl6jepthqrelqwi5i4wzhh9owri9ekp8du5mpuujhkprtpzyuqjfhvqwmht2xf9u46n59hynd5238jnuubxi3n6uu3o64ibf2wzskuog8s0lsga1ljfyrz63skuyklsky5f3f77sg6v7fzg88y3jb36se7t5zejf74n22cnyh6ddkeqjm85qy3v3sevz21xc2r83zf44tqqon1v9yvfjsk62cfvukntz7byxzxu0u5q1ct1pkgorkq19fo98wgltcuc8b1avkr1rjb3rn8xympy6nsojjzrly7l0pvx463wdkkocoubo550pd39xvxmqrcuz2gvrevg2jg8u59dx6bjk82cnzr8qsen86w8yktn49yvwn00ahuc8tml62hloo2ixaqb7y3argt7q0luqj3xm73e2xvxapxzrbqtqgyv6mfsvet5apt73pv1ium2lbdjzqrf7ck5g0xgrrwopa292l7ifemiguxde6cip8w0hltdusi4jnqbhiu5dd0y2qu0xhoxuqv9gimu1k0roob5iu07rvbdt7643nwyhmt236hpa1r20rvqjfw21s4egyc3ff28dtqbp85tyrad8o0hzxq55fekenymz5m0273j8aeh9ykn64w7yk1ui6o24hzicwlmz7yfb3gy9gdy6cafdzqhzps3pvhw7dfqh99j1hlxw1qqfl2iten102mcbpp518cgvdbx3cm3s34ez9t7ihvtfh2r2tqipee6hrfa7qsvl79j6imjlob729bgprsmvrlpk1u171s1txh0di03uoaeumnoy761csrkp40bxlfnb6p1oam0orw6c2xg1ohgfnzzwn5ijdgjsmspi9unmnsfmkma29ih3kekpsq7gq8u9ox7jb4e3ph1bbzjysfd7w28ccqz44mnh7m97hxlu7mjwxp4utclkj532nl8uzpi3xfhcaywty1g3xlhuep6kc34s0bfrzhkars04jcaekq7rk78dsmx7yqrto01nsmwuyv0k9tlmp6amqbzkxuotbqzcjimy8bfnyed0kz8skfids4uwxobz4n0i1r4b3myb6ew3hcm691lbrgf8tl3z0q5r860neu9ebqi2uiuyab9413atdxd6b2xt9dsdpezzbbmgf4lwplmndu96ynbe4cs4u9y00n7cs0tle3l9ovzi17z8mz3mbihv8bciy9vcqossw62l0hjgnl5kbwumcajrs8l4eef3j4174lupy1axdxd9iqksuizod38eruwm1trhmpw9nvv2g498uj4sn73rmdvogx5tiz656i3enegmhjieure57rhgbshoh58nfcr2iynqc03g8cl05psareqq10xdol9e93zzbob9k13owc1kftjf6gsph70pyttr8mibx0tkvt8n36jwz9qyws186zcrapziiwlp17lnvceg4tanseuow0ytmeszvwolaonzga4n67rtp2ah24xrv5lc17velgo8mftg2cgyh5odtkh17shnir72uuex2yp3kzaug1d7k0ox57zc93772mc33734j0qrt9xz9triii0jakjvwtn3yy4ox658puk7medtvc6lu57j3hekfuh267jebl29wglc1y2468c6myndyig03dd4mft6rb76rglrdlttsqtpgd6dvestaatli29rr4vd7mhq88blp5c2agdlhcg9xjv6zjtx73ngt5qneme0zgaxis6lrv1pq3vu50l5x383b1ixom92z5afx4qw9bul7ctfee8xs5gqh46twvaifio93okhav9jvjzdhk2hqw2zhu83iw8abby6fy1vit82um7o927t492aq175fqpoa5w6ju8ugnt8y1h28z35na8bcchrwd1uswgy05qyij341lgn1ku1f22q3949zuepmuttiwdth3dkwrj6ao133zezeam4buizhszwvec6j70hvtlfetpuntbolzwbxax6jopui9cwi6uqwumjew0fes62bdldfxg8uk6j53j4avjwv685zgsst4792kj0wdf0xbjp4azblxwm3h2uh1c1pd2fy0ixchb2068gd87z35grad7ldy2us7zyn4a97zqq8x3f3541gmwtmts9t5ixfrhggxngk603jtwd3ny55fk6w57q32o5buvpsfzd1boft2uiyea4640bn21iwf6a1i7mozl7dib8ek8lbmp0hb10jtldvguwn0yx3x5yz42x4w137bczsdixr9ih7954j03nsc1e2etwi2nkr09hczwwa14pa6krwm8ctrj7fr8p40unxub07ni64lykoiqwoeapanfekcoxg0le4ndqddl70r89bycrdw0aa9h4sgutpsi3dx0zx53sl9cm5643do10jsq5ucwdtdcdexkf68dmf4llhrucp6i8wsjtf9skyq6ewt84w6q0lyh8u8d0bjmf0hdxdavlgslfk5fuz5w5d77tl1m4n2x68qxdrbhdw5lbl5xs870nizj3sreplgl8tsykpwljd0x1egb8y4w2sufd2mf4opy2ss9sddinkln496grxdnih4ltbak2mt8jqflprtwxow6oomz87nvbjl3y6bl6ctsgaptrnsrppiarmbcuyoigj9i88cgy80h9ubzx20utbwv26nli0eez4vveff9bd3vq35hjv0y7dmaye2g8gs157q76dlnlzphl9lb2qwwqgmep97auq2cbo1vbhpb4tl4x77o713hdhwb749qldbdcxcx43juvse8edljgk6h8hbu5bni1b76ilgn1o7f8an1wz9wd1nyhz8gv7bieppv19nuo260p6deodiepxyf47m8yywt7wl9sxl5x9
uhoxvkvwze93f8tw91oo3wqyn20emzwn5ihbsca0lqyd66qeahtl0isg1whxszuq6vvf3m8tuy7xehxlquzemwhacm8yeb5vvl7k9b2fmgmrg381uf7mlp6k4w4fbdltul8oan1psbv5qdw8j9j20vxwz2txms6491z1u8e90nmyvjc60z562xdlu853j07n90df7uva9xvrfzipiyehg3qcosrntmsri1t34uex7gdo5fuijn0d15oty19ztbkvgwn4u7rbkc0xocsgeb1lgomnht3iotcqnvsca3nwbuogbazqv96aq9o1k2nlstxfpxwo6hcfwcq1xncszc3het3uwda9685aesjdfkmps4njiiuvj1uv1wf5egn6zmh4mj9szbvf66m8g8720egtea6za58cdkskhblo4c26o485fbvke2glvzn776wc5wo33iqk6t4usa895qledrlqg3083ult93028yy2rly3xwsf4c2hrpqgokrrvgn59uhvi5w0okz5y0qzo1hq1npfrpvhi2a3cyf71q 00:06:23.916 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:23.916 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:23.916 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:23.916 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:23.916 [2024-11-26 19:14:22.314154] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:23.916 [2024-11-26 19:14:22.314245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59856 ] 00:06:23.916 { 00:06:23.916 "subsystems": [ 00:06:23.916 { 00:06:23.916 "subsystem": "bdev", 00:06:23.916 "config": [ 00:06:23.916 { 00:06:23.916 "params": { 00:06:23.916 "trtype": "pcie", 00:06:23.916 "traddr": "0000:00:10.0", 00:06:23.916 "name": "Nvme0" 00:06:23.916 }, 00:06:23.916 "method": "bdev_nvme_attach_controller" 00:06:23.916 }, 00:06:23.916 { 00:06:23.916 "method": "bdev_wait_for_examine" 00:06:23.916 } 00:06:23.916 ] 00:06:23.916 } 00:06:23.916 ] 00:06:23.916 } 00:06:24.175 [2024-11-26 19:14:22.461740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.175 [2024-11-26 19:14:22.508219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.175 [2024-11-26 19:14:22.563099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.435  [2024-11-26T19:14:22.875Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:24.435 00:06:24.435 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:24.435 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:24.435 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:24.435 19:14:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:24.694 [2024-11-26 19:14:22.898213] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
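dd_rw_offset checks that seek and skip are honoured in units of the native block: the 4096 generated characters above are written one block into the bdev with --seek=1, read back from the same offset with --skip=1 --count=1, and the harness then compares the read-back data against the original string (the long backslash-escaped pattern a little further down is that comparison as rendered by bash xtrace). A compact sketch of the same round trip is below; the paths match the log, the character generator is a stand-in for gen_bytes, and the comparison is reduced to cmp.

  # Illustrative seek/skip round trip at the native block size (not basic_rw.sh itself)
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  CONF=bdev.json                                               # assumed config file, same JSON as above
  tr -dc 'a-z0-9' < /dev/urandom | head -c 4096 > "$DUMP0"     # stand-in for gen_bytes 4096
  "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$CONF"             # write at block offset 1
  "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$CONF"   # read that block back
  cmp "$DUMP0" "$DUMP1" && echo "offset round trip OK"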
00:06:24.694 [2024-11-26 19:14:22.898297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59864 ] 00:06:24.694 { 00:06:24.694 "subsystems": [ 00:06:24.694 { 00:06:24.694 "subsystem": "bdev", 00:06:24.694 "config": [ 00:06:24.694 { 00:06:24.694 "params": { 00:06:24.694 "trtype": "pcie", 00:06:24.694 "traddr": "0000:00:10.0", 00:06:24.694 "name": "Nvme0" 00:06:24.694 }, 00:06:24.694 "method": "bdev_nvme_attach_controller" 00:06:24.694 }, 00:06:24.694 { 00:06:24.694 "method": "bdev_wait_for_examine" 00:06:24.694 } 00:06:24.694 ] 00:06:24.694 } 00:06:24.694 ] 00:06:24.694 } 00:06:24.694 [2024-11-26 19:14:23.037827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.694 [2024-11-26 19:14:23.086111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.954 [2024-11-26 19:14:23.142281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.954  [2024-11-26T19:14:23.653Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:25.213 00:06:25.213 19:14:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 95kut7jzpg6xqargdbn5bqt7likdldd3tmx1c5o2glr3oqi30ajnsj8w1170l6tlupte92eh13qlk3f38byofwc8doi7ccz1wc21inshl9ruujsskfcxibgysnfwxkrfmguaymzbrtuxd0ofvuphwwqgy5bf5sw7kahj4q2ncrh2m4ikf2nydf078lv16wegldeu715mf4426n9hze7ld9pfgrgbkwr4ramnks0v36puil4g3zwp12w6rr4cbtn2zdgkw0e4kkwkme8demvcto45iabormymgxreokszzmko07h13plj5j9g3ywubqsd08jp4hn7a6wpq34lmmc16urc0059zw927sy4afyr79tlktnv0hsnyjaer82g00cupqyvrifv2wxgdidzmrvl3fgmoj9ipvc43qntd7xpvduopinij5jdml0tmst8h23znmhooyn687p6mgripexz2cbejfumikml578p3eryaxh5o2m4mzgaofnmd4xosie26hxz66gt23yn3dvss5oslscxrjbe57l90y6kuxhkwjmes1hz0i486s8mkv47jh17q8zrj4v2oka0d0r9opt4ayscn7vy9gbmvszk3c1ofthggev6dpkbzslh5nocc3zhycok1ol0xwtus3gtsn3sa4vvl15x78uxkyhbh81hexgijact23zmltroxbl6jepthqrelqwi5i4wzhh9owri9ekp8du5mpuujhkprtpzyuqjfhvqwmht2xf9u46n59hynd5238jnuubxi3n6uu3o64ibf2wzskuog8s0lsga1ljfyrz63skuyklsky5f3f77sg6v7fzg88y3jb36se7t5zejf74n22cnyh6ddkeqjm85qy3v3sevz21xc2r83zf44tqqon1v9yvfjsk62cfvukntz7byxzxu0u5q1ct1pkgorkq19fo98wgltcuc8b1avkr1rjb3rn8xympy6nsojjzrly7l0pvx463wdkkocoubo550pd39xvxmqrcuz2gvrevg2jg8u59dx6bjk82cnzr8qsen86w8yktn49yvwn00ahuc8tml62hloo2ixaqb7y3argt7q0luqj3xm73e2xvxapxzrbqtqgyv6mfsvet5apt73pv1ium2lbdjzqrf7ck5g0xgrrwopa292l7ifemiguxde6cip8w0hltdusi4jnqbhiu5dd0y2qu0xhoxuqv9gimu1k0roob5iu07rvbdt7643nwyhmt236hpa1r20rvqjfw21s4egyc3ff28dtqbp85tyrad8o0hzxq55fekenymz5m0273j8aeh9ykn64w7yk1ui6o24hzicwlmz7yfb3gy9gdy6cafdzqhzps3pvhw7dfqh99j1hlxw1qqfl2iten102mcbpp518cgvdbx3cm3s34ez9t7ihvtfh2r2tqipee6hrfa7qsvl79j6imjlob729bgprsmvrlpk1u171s1txh0di03uoaeumnoy761csrkp40bxlfnb6p1oam0orw6c2xg1ohgfnzzwn5ijdgjsmspi9unmnsfmkma29ih3kekpsq7gq8u9ox7jb4e3ph1bbzjysfd7w28ccqz44mnh7m97hxlu7mjwxp4utclkj532nl8uzpi3xfhcaywty1g3xlhuep6kc34s0bfrzhkars04jcaekq7rk78dsmx7yqrto01nsmwuyv0k9tlmp6amqbzkxuotbqzcjimy8bfnyed0kz8skfids4uwxobz4n0i1r4b3myb6ew3hcm691lbrgf8tl3z0q5r860neu9ebqi2uiuyab9413atdxd6b2xt9dsdpezzbbmgf4lwplmndu96ynbe4cs4u9y00n7cs0tle3l9ovzi17z8mz3mbihv8bciy9vcqossw62l0hjgnl5kbwumcajrs8l4eef3j4174lupy1axdxd9iqksuizod38eruwm1trhmpw9nvv2g498uj4sn73rmdvogx5tiz656i3enegmhjieure57rhgbshoh58nfcr2iynqc03g8cl05psareqq10xdol9e93zzbob9k13owc1kftjf6gsph70pyttr8mibx0tkvt8n36jwz9qyws186zcrapziiwlp17ln
vceg4tanseuow0ytmeszvwolaonzga4n67rtp2ah24xrv5lc17velgo8mftg2cgyh5odtkh17shnir72uuex2yp3kzaug1d7k0ox57zc93772mc33734j0qrt9xz9triii0jakjvwtn3yy4ox658puk7medtvc6lu57j3hekfuh267jebl29wglc1y2468c6myndyig03dd4mft6rb76rglrdlttsqtpgd6dvestaatli29rr4vd7mhq88blp5c2agdlhcg9xjv6zjtx73ngt5qneme0zgaxis6lrv1pq3vu50l5x383b1ixom92z5afx4qw9bul7ctfee8xs5gqh46twvaifio93okhav9jvjzdhk2hqw2zhu83iw8abby6fy1vit82um7o927t492aq175fqpoa5w6ju8ugnt8y1h28z35na8bcchrwd1uswgy05qyij341lgn1ku1f22q3949zuepmuttiwdth3dkwrj6ao133zezeam4buizhszwvec6j70hvtlfetpuntbolzwbxax6jopui9cwi6uqwumjew0fes62bdldfxg8uk6j53j4avjwv685zgsst4792kj0wdf0xbjp4azblxwm3h2uh1c1pd2fy0ixchb2068gd87z35grad7ldy2us7zyn4a97zqq8x3f3541gmwtmts9t5ixfrhggxngk603jtwd3ny55fk6w57q32o5buvpsfzd1boft2uiyea4640bn21iwf6a1i7mozl7dib8ek8lbmp0hb10jtldvguwn0yx3x5yz42x4w137bczsdixr9ih7954j03nsc1e2etwi2nkr09hczwwa14pa6krwm8ctrj7fr8p40unxub07ni64lykoiqwoeapanfekcoxg0le4ndqddl70r89bycrdw0aa9h4sgutpsi3dx0zx53sl9cm5643do10jsq5ucwdtdcdexkf68dmf4llhrucp6i8wsjtf9skyq6ewt84w6q0lyh8u8d0bjmf0hdxdavlgslfk5fuz5w5d77tl1m4n2x68qxdrbhdw5lbl5xs870nizj3sreplgl8tsykpwljd0x1egb8y4w2sufd2mf4opy2ss9sddinkln496grxdnih4ltbak2mt8jqflprtwxow6oomz87nvbjl3y6bl6ctsgaptrnsrppiarmbcuyoigj9i88cgy80h9ubzx20utbwv26nli0eez4vveff9bd3vq35hjv0y7dmaye2g8gs157q76dlnlzphl9lb2qwwqgmep97auq2cbo1vbhpb4tl4x77o713hdhwb749qldbdcxcx43juvse8edljgk6h8hbu5bni1b76ilgn1o7f8an1wz9wd1nyhz8gv7bieppv19nuo260p6deodiepxyf47m8yywt7wl9sxl5x9uhoxvkvwze93f8tw91oo3wqyn20emzwn5ihbsca0lqyd66qeahtl0isg1whxszuq6vvf3m8tuy7xehxlquzemwhacm8yeb5vvl7k9b2fmgmrg381uf7mlp6k4w4fbdltul8oan1psbv5qdw8j9j20vxwz2txms6491z1u8e90nmyvjc60z562xdlu853j07n90df7uva9xvrfzipiyehg3qcosrntmsri1t34uex7gdo5fuijn0d15oty19ztbkvgwn4u7rbkc0xocsgeb1lgomnht3iotcqnvsca3nwbuogbazqv96aq9o1k2nlstxfpxwo6hcfwcq1xncszc3het3uwda9685aesjdfkmps4njiiuvj1uv1wf5egn6zmh4mj9szbvf66m8g8720egtea6za58cdkskhblo4c26o485fbvke2glvzn776wc5wo33iqk6t4usa895qledrlqg3083ult93028yy2rly3xwsf4c2hrpqgokrrvgn59uhvi5w0okz5y0qzo1hq1npfrpvhi2a3cyf71q == 
\9\5\k\u\t\7\j\z\p\g\6\x\q\a\r\g\d\b\n\5\b\q\t\7\l\i\k\d\l\d\d\3\t\m\x\1\c\5\o\2\g\l\r\3\o\q\i\3\0\a\j\n\s\j\8\w\1\1\7\0\l\6\t\l\u\p\t\e\9\2\e\h\1\3\q\l\k\3\f\3\8\b\y\o\f\w\c\8\d\o\i\7\c\c\z\1\w\c\2\1\i\n\s\h\l\9\r\u\u\j\s\s\k\f\c\x\i\b\g\y\s\n\f\w\x\k\r\f\m\g\u\a\y\m\z\b\r\t\u\x\d\0\o\f\v\u\p\h\w\w\q\g\y\5\b\f\5\s\w\7\k\a\h\j\4\q\2\n\c\r\h\2\m\4\i\k\f\2\n\y\d\f\0\7\8\l\v\1\6\w\e\g\l\d\e\u\7\1\5\m\f\4\4\2\6\n\9\h\z\e\7\l\d\9\p\f\g\r\g\b\k\w\r\4\r\a\m\n\k\s\0\v\3\6\p\u\i\l\4\g\3\z\w\p\1\2\w\6\r\r\4\c\b\t\n\2\z\d\g\k\w\0\e\4\k\k\w\k\m\e\8\d\e\m\v\c\t\o\4\5\i\a\b\o\r\m\y\m\g\x\r\e\o\k\s\z\z\m\k\o\0\7\h\1\3\p\l\j\5\j\9\g\3\y\w\u\b\q\s\d\0\8\j\p\4\h\n\7\a\6\w\p\q\3\4\l\m\m\c\1\6\u\r\c\0\0\5\9\z\w\9\2\7\s\y\4\a\f\y\r\7\9\t\l\k\t\n\v\0\h\s\n\y\j\a\e\r\8\2\g\0\0\c\u\p\q\y\v\r\i\f\v\2\w\x\g\d\i\d\z\m\r\v\l\3\f\g\m\o\j\9\i\p\v\c\4\3\q\n\t\d\7\x\p\v\d\u\o\p\i\n\i\j\5\j\d\m\l\0\t\m\s\t\8\h\2\3\z\n\m\h\o\o\y\n\6\8\7\p\6\m\g\r\i\p\e\x\z\2\c\b\e\j\f\u\m\i\k\m\l\5\7\8\p\3\e\r\y\a\x\h\5\o\2\m\4\m\z\g\a\o\f\n\m\d\4\x\o\s\i\e\2\6\h\x\z\6\6\g\t\2\3\y\n\3\d\v\s\s\5\o\s\l\s\c\x\r\j\b\e\5\7\l\9\0\y\6\k\u\x\h\k\w\j\m\e\s\1\h\z\0\i\4\8\6\s\8\m\k\v\4\7\j\h\1\7\q\8\z\r\j\4\v\2\o\k\a\0\d\0\r\9\o\p\t\4\a\y\s\c\n\7\v\y\9\g\b\m\v\s\z\k\3\c\1\o\f\t\h\g\g\e\v\6\d\p\k\b\z\s\l\h\5\n\o\c\c\3\z\h\y\c\o\k\1\o\l\0\x\w\t\u\s\3\g\t\s\n\3\s\a\4\v\v\l\1\5\x\7\8\u\x\k\y\h\b\h\8\1\h\e\x\g\i\j\a\c\t\2\3\z\m\l\t\r\o\x\b\l\6\j\e\p\t\h\q\r\e\l\q\w\i\5\i\4\w\z\h\h\9\o\w\r\i\9\e\k\p\8\d\u\5\m\p\u\u\j\h\k\p\r\t\p\z\y\u\q\j\f\h\v\q\w\m\h\t\2\x\f\9\u\4\6\n\5\9\h\y\n\d\5\2\3\8\j\n\u\u\b\x\i\3\n\6\u\u\3\o\6\4\i\b\f\2\w\z\s\k\u\o\g\8\s\0\l\s\g\a\1\l\j\f\y\r\z\6\3\s\k\u\y\k\l\s\k\y\5\f\3\f\7\7\s\g\6\v\7\f\z\g\8\8\y\3\j\b\3\6\s\e\7\t\5\z\e\j\f\7\4\n\2\2\c\n\y\h\6\d\d\k\e\q\j\m\8\5\q\y\3\v\3\s\e\v\z\2\1\x\c\2\r\8\3\z\f\4\4\t\q\q\o\n\1\v\9\y\v\f\j\s\k\6\2\c\f\v\u\k\n\t\z\7\b\y\x\z\x\u\0\u\5\q\1\c\t\1\p\k\g\o\r\k\q\1\9\f\o\9\8\w\g\l\t\c\u\c\8\b\1\a\v\k\r\1\r\j\b\3\r\n\8\x\y\m\p\y\6\n\s\o\j\j\z\r\l\y\7\l\0\p\v\x\4\6\3\w\d\k\k\o\c\o\u\b\o\5\5\0\p\d\3\9\x\v\x\m\q\r\c\u\z\2\g\v\r\e\v\g\2\j\g\8\u\5\9\d\x\6\b\j\k\8\2\c\n\z\r\8\q\s\e\n\8\6\w\8\y\k\t\n\4\9\y\v\w\n\0\0\a\h\u\c\8\t\m\l\6\2\h\l\o\o\2\i\x\a\q\b\7\y\3\a\r\g\t\7\q\0\l\u\q\j\3\x\m\7\3\e\2\x\v\x\a\p\x\z\r\b\q\t\q\g\y\v\6\m\f\s\v\e\t\5\a\p\t\7\3\p\v\1\i\u\m\2\l\b\d\j\z\q\r\f\7\c\k\5\g\0\x\g\r\r\w\o\p\a\2\9\2\l\7\i\f\e\m\i\g\u\x\d\e\6\c\i\p\8\w\0\h\l\t\d\u\s\i\4\j\n\q\b\h\i\u\5\d\d\0\y\2\q\u\0\x\h\o\x\u\q\v\9\g\i\m\u\1\k\0\r\o\o\b\5\i\u\0\7\r\v\b\d\t\7\6\4\3\n\w\y\h\m\t\2\3\6\h\p\a\1\r\2\0\r\v\q\j\f\w\2\1\s\4\e\g\y\c\3\f\f\2\8\d\t\q\b\p\8\5\t\y\r\a\d\8\o\0\h\z\x\q\5\5\f\e\k\e\n\y\m\z\5\m\0\2\7\3\j\8\a\e\h\9\y\k\n\6\4\w\7\y\k\1\u\i\6\o\2\4\h\z\i\c\w\l\m\z\7\y\f\b\3\g\y\9\g\d\y\6\c\a\f\d\z\q\h\z\p\s\3\p\v\h\w\7\d\f\q\h\9\9\j\1\h\l\x\w\1\q\q\f\l\2\i\t\e\n\1\0\2\m\c\b\p\p\5\1\8\c\g\v\d\b\x\3\c\m\3\s\3\4\e\z\9\t\7\i\h\v\t\f\h\2\r\2\t\q\i\p\e\e\6\h\r\f\a\7\q\s\v\l\7\9\j\6\i\m\j\l\o\b\7\2\9\b\g\p\r\s\m\v\r\l\p\k\1\u\1\7\1\s\1\t\x\h\0\d\i\0\3\u\o\a\e\u\m\n\o\y\7\6\1\c\s\r\k\p\4\0\b\x\l\f\n\b\6\p\1\o\a\m\0\o\r\w\6\c\2\x\g\1\o\h\g\f\n\z\z\w\n\5\i\j\d\g\j\s\m\s\p\i\9\u\n\m\n\s\f\m\k\m\a\2\9\i\h\3\k\e\k\p\s\q\7\g\q\8\u\9\o\x\7\j\b\4\e\3\p\h\1\b\b\z\j\y\s\f\d\7\w\2\8\c\c\q\z\4\4\m\n\h\7\m\9\7\h\x\l\u\7\m\j\w\x\p\4\u\t\c\l\k\j\5\3\2\n\l\8\u\z\p\i\3\x\f\h\c\a\y\w\t\y\1\g\3\x\l\h\u\e\p\6\k\c\3\4\s\0\b\f\r\z\h\k\a\r\s\0\4\j\c\a\e\k\q\7\r\k\7\8\d\s\m\x\7\y\q\r\t\o\0\1\n\s\m\w\u\y\v\0\k\9\t\l\m\p\6\a\m\q\b\z\k\x\u\o\t\b\q\z\c\j\i\m\y\8\b\f\n\y\e\d\0\k\z\8\s\k\f\i\d\s\4\u\w\x\o\b\z\4\n\0\i\1\r\4\b\3\m\y\b\6\e\w\3\h\c\m\6\
9\1\l\b\r\g\f\8\t\l\3\z\0\q\5\r\8\6\0\n\e\u\9\e\b\q\i\2\u\i\u\y\a\b\9\4\1\3\a\t\d\x\d\6\b\2\x\t\9\d\s\d\p\e\z\z\b\b\m\g\f\4\l\w\p\l\m\n\d\u\9\6\y\n\b\e\4\c\s\4\u\9\y\0\0\n\7\c\s\0\t\l\e\3\l\9\o\v\z\i\1\7\z\8\m\z\3\m\b\i\h\v\8\b\c\i\y\9\v\c\q\o\s\s\w\6\2\l\0\h\j\g\n\l\5\k\b\w\u\m\c\a\j\r\s\8\l\4\e\e\f\3\j\4\1\7\4\l\u\p\y\1\a\x\d\x\d\9\i\q\k\s\u\i\z\o\d\3\8\e\r\u\w\m\1\t\r\h\m\p\w\9\n\v\v\2\g\4\9\8\u\j\4\s\n\7\3\r\m\d\v\o\g\x\5\t\i\z\6\5\6\i\3\e\n\e\g\m\h\j\i\e\u\r\e\5\7\r\h\g\b\s\h\o\h\5\8\n\f\c\r\2\i\y\n\q\c\0\3\g\8\c\l\0\5\p\s\a\r\e\q\q\1\0\x\d\o\l\9\e\9\3\z\z\b\o\b\9\k\1\3\o\w\c\1\k\f\t\j\f\6\g\s\p\h\7\0\p\y\t\t\r\8\m\i\b\x\0\t\k\v\t\8\n\3\6\j\w\z\9\q\y\w\s\1\8\6\z\c\r\a\p\z\i\i\w\l\p\1\7\l\n\v\c\e\g\4\t\a\n\s\e\u\o\w\0\y\t\m\e\s\z\v\w\o\l\a\o\n\z\g\a\4\n\6\7\r\t\p\2\a\h\2\4\x\r\v\5\l\c\1\7\v\e\l\g\o\8\m\f\t\g\2\c\g\y\h\5\o\d\t\k\h\1\7\s\h\n\i\r\7\2\u\u\e\x\2\y\p\3\k\z\a\u\g\1\d\7\k\0\o\x\5\7\z\c\9\3\7\7\2\m\c\3\3\7\3\4\j\0\q\r\t\9\x\z\9\t\r\i\i\i\0\j\a\k\j\v\w\t\n\3\y\y\4\o\x\6\5\8\p\u\k\7\m\e\d\t\v\c\6\l\u\5\7\j\3\h\e\k\f\u\h\2\6\7\j\e\b\l\2\9\w\g\l\c\1\y\2\4\6\8\c\6\m\y\n\d\y\i\g\0\3\d\d\4\m\f\t\6\r\b\7\6\r\g\l\r\d\l\t\t\s\q\t\p\g\d\6\d\v\e\s\t\a\a\t\l\i\2\9\r\r\4\v\d\7\m\h\q\8\8\b\l\p\5\c\2\a\g\d\l\h\c\g\9\x\j\v\6\z\j\t\x\7\3\n\g\t\5\q\n\e\m\e\0\z\g\a\x\i\s\6\l\r\v\1\p\q\3\v\u\5\0\l\5\x\3\8\3\b\1\i\x\o\m\9\2\z\5\a\f\x\4\q\w\9\b\u\l\7\c\t\f\e\e\8\x\s\5\g\q\h\4\6\t\w\v\a\i\f\i\o\9\3\o\k\h\a\v\9\j\v\j\z\d\h\k\2\h\q\w\2\z\h\u\8\3\i\w\8\a\b\b\y\6\f\y\1\v\i\t\8\2\u\m\7\o\9\2\7\t\4\9\2\a\q\1\7\5\f\q\p\o\a\5\w\6\j\u\8\u\g\n\t\8\y\1\h\2\8\z\3\5\n\a\8\b\c\c\h\r\w\d\1\u\s\w\g\y\0\5\q\y\i\j\3\4\1\l\g\n\1\k\u\1\f\2\2\q\3\9\4\9\z\u\e\p\m\u\t\t\i\w\d\t\h\3\d\k\w\r\j\6\a\o\1\3\3\z\e\z\e\a\m\4\b\u\i\z\h\s\z\w\v\e\c\6\j\7\0\h\v\t\l\f\e\t\p\u\n\t\b\o\l\z\w\b\x\a\x\6\j\o\p\u\i\9\c\w\i\6\u\q\w\u\m\j\e\w\0\f\e\s\6\2\b\d\l\d\f\x\g\8\u\k\6\j\5\3\j\4\a\v\j\w\v\6\8\5\z\g\s\s\t\4\7\9\2\k\j\0\w\d\f\0\x\b\j\p\4\a\z\b\l\x\w\m\3\h\2\u\h\1\c\1\p\d\2\f\y\0\i\x\c\h\b\2\0\6\8\g\d\8\7\z\3\5\g\r\a\d\7\l\d\y\2\u\s\7\z\y\n\4\a\9\7\z\q\q\8\x\3\f\3\5\4\1\g\m\w\t\m\t\s\9\t\5\i\x\f\r\h\g\g\x\n\g\k\6\0\3\j\t\w\d\3\n\y\5\5\f\k\6\w\5\7\q\3\2\o\5\b\u\v\p\s\f\z\d\1\b\o\f\t\2\u\i\y\e\a\4\6\4\0\b\n\2\1\i\w\f\6\a\1\i\7\m\o\z\l\7\d\i\b\8\e\k\8\l\b\m\p\0\h\b\1\0\j\t\l\d\v\g\u\w\n\0\y\x\3\x\5\y\z\4\2\x\4\w\1\3\7\b\c\z\s\d\i\x\r\9\i\h\7\9\5\4\j\0\3\n\s\c\1\e\2\e\t\w\i\2\n\k\r\0\9\h\c\z\w\w\a\1\4\p\a\6\k\r\w\m\8\c\t\r\j\7\f\r\8\p\4\0\u\n\x\u\b\0\7\n\i\6\4\l\y\k\o\i\q\w\o\e\a\p\a\n\f\e\k\c\o\x\g\0\l\e\4\n\d\q\d\d\l\7\0\r\8\9\b\y\c\r\d\w\0\a\a\9\h\4\s\g\u\t\p\s\i\3\d\x\0\z\x\5\3\s\l\9\c\m\5\6\4\3\d\o\1\0\j\s\q\5\u\c\w\d\t\d\c\d\e\x\k\f\6\8\d\m\f\4\l\l\h\r\u\c\p\6\i\8\w\s\j\t\f\9\s\k\y\q\6\e\w\t\8\4\w\6\q\0\l\y\h\8\u\8\d\0\b\j\m\f\0\h\d\x\d\a\v\l\g\s\l\f\k\5\f\u\z\5\w\5\d\7\7\t\l\1\m\4\n\2\x\6\8\q\x\d\r\b\h\d\w\5\l\b\l\5\x\s\8\7\0\n\i\z\j\3\s\r\e\p\l\g\l\8\t\s\y\k\p\w\l\j\d\0\x\1\e\g\b\8\y\4\w\2\s\u\f\d\2\m\f\4\o\p\y\2\s\s\9\s\d\d\i\n\k\l\n\4\9\6\g\r\x\d\n\i\h\4\l\t\b\a\k\2\m\t\8\j\q\f\l\p\r\t\w\x\o\w\6\o\o\m\z\8\7\n\v\b\j\l\3\y\6\b\l\6\c\t\s\g\a\p\t\r\n\s\r\p\p\i\a\r\m\b\c\u\y\o\i\g\j\9\i\8\8\c\g\y\8\0\h\9\u\b\z\x\2\0\u\t\b\w\v\2\6\n\l\i\0\e\e\z\4\v\v\e\f\f\9\b\d\3\v\q\3\5\h\j\v\0\y\7\d\m\a\y\e\2\g\8\g\s\1\5\7\q\7\6\d\l\n\l\z\p\h\l\9\l\b\2\q\w\w\q\g\m\e\p\9\7\a\u\q\2\c\b\o\1\v\b\h\p\b\4\t\l\4\x\7\7\o\7\1\3\h\d\h\w\b\7\4\9\q\l\d\b\d\c\x\c\x\4\3\j\u\v\s\e\8\e\d\l\j\g\k\6\h\8\h\b\u\5\b\n\i\1\b\7\6\i\l\g\n\1\o\7\f\8\a\n\1\w\z\9\w\d\1\n\y\h\z\8\g\v\7\b\i\e\p\p\v\1\9\n\u\o\2\6\0\p\6\d\e\o\d\i\e\p\x\y\f\4\7\m\8\y\y\w\t\7\w\l\9\s\x\l\5\x\9\u\h\o\x\v
\k\v\w\z\e\9\3\f\8\t\w\9\1\o\o\3\w\q\y\n\2\0\e\m\z\w\n\5\i\h\b\s\c\a\0\l\q\y\d\6\6\q\e\a\h\t\l\0\i\s\g\1\w\h\x\s\z\u\q\6\v\v\f\3\m\8\t\u\y\7\x\e\h\x\l\q\u\z\e\m\w\h\a\c\m\8\y\e\b\5\v\v\l\7\k\9\b\2\f\m\g\m\r\g\3\8\1\u\f\7\m\l\p\6\k\4\w\4\f\b\d\l\t\u\l\8\o\a\n\1\p\s\b\v\5\q\d\w\8\j\9\j\2\0\v\x\w\z\2\t\x\m\s\6\4\9\1\z\1\u\8\e\9\0\n\m\y\v\j\c\6\0\z\5\6\2\x\d\l\u\8\5\3\j\0\7\n\9\0\d\f\7\u\v\a\9\x\v\r\f\z\i\p\i\y\e\h\g\3\q\c\o\s\r\n\t\m\s\r\i\1\t\3\4\u\e\x\7\g\d\o\5\f\u\i\j\n\0\d\1\5\o\t\y\1\9\z\t\b\k\v\g\w\n\4\u\7\r\b\k\c\0\x\o\c\s\g\e\b\1\l\g\o\m\n\h\t\3\i\o\t\c\q\n\v\s\c\a\3\n\w\b\u\o\g\b\a\z\q\v\9\6\a\q\9\o\1\k\2\n\l\s\t\x\f\p\x\w\o\6\h\c\f\w\c\q\1\x\n\c\s\z\c\3\h\e\t\3\u\w\d\a\9\6\8\5\a\e\s\j\d\f\k\m\p\s\4\n\j\i\i\u\v\j\1\u\v\1\w\f\5\e\g\n\6\z\m\h\4\m\j\9\s\z\b\v\f\6\6\m\8\g\8\7\2\0\e\g\t\e\a\6\z\a\5\8\c\d\k\s\k\h\b\l\o\4\c\2\6\o\4\8\5\f\b\v\k\e\2\g\l\v\z\n\7\7\6\w\c\5\w\o\3\3\i\q\k\6\t\4\u\s\a\8\9\5\q\l\e\d\r\l\q\g\3\0\8\3\u\l\t\9\3\0\2\8\y\y\2\r\l\y\3\x\w\s\f\4\c\2\h\r\p\q\g\o\k\r\r\v\g\n\5\9\u\h\v\i\5\w\0\o\k\z\5\y\0\q\z\o\1\h\q\1\n\p\f\r\p\v\h\i\2\a\3\c\y\f\7\1\q ]] 00:06:25.214 00:06:25.214 real 0m1.227s 00:06:25.214 user 0m0.829s 00:06:25.214 sys 0m0.583s 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:25.214 ************************************ 00:06:25.214 END TEST dd_rw_offset 00:06:25.214 ************************************ 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.214 19:14:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.214 [2024-11-26 19:14:23.538983] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
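The dd_rw_offset case that closes above writes generated data at a byte offset on the Nvme0 bdev and then verifies it byte-for-byte: the read -rn4096 data_check and the long [[ ... == ... ]] comparison traced above are that verification step. A minimal sketch of the pattern, assuming spdk_dd's --ib, --seek and --skip options (only --if/--of/--ob/--bs/--count appear verbatim in this log) and the suite's gen_bytes random-string helper; the bdev JSON config argument is referenced by name only and is sketched a little further down:

    # sketch only, not the suite's exact invocation
    data=$(gen_bytes 4096)                                   # random printable payload, as traced above
    printf '%s' "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --seek=1 --json bdev_conf.json   # write one block past offset 0
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --skip=1 --count=1 --json bdev_conf.json   # read it back
    read -rn4096 data_check < dd.dump1
    [[ $data_check == "$data" ]] && echo 'offset read-back matches'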
00:06:25.214 [2024-11-26 19:14:23.539091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 00:06:25.214 { 00:06:25.214 "subsystems": [ 00:06:25.214 { 00:06:25.214 "subsystem": "bdev", 00:06:25.214 "config": [ 00:06:25.214 { 00:06:25.214 "params": { 00:06:25.214 "trtype": "pcie", 00:06:25.214 "traddr": "0000:00:10.0", 00:06:25.214 "name": "Nvme0" 00:06:25.214 }, 00:06:25.214 "method": "bdev_nvme_attach_controller" 00:06:25.214 }, 00:06:25.214 { 00:06:25.214 "method": "bdev_wait_for_examine" 00:06:25.214 } 00:06:25.214 ] 00:06:25.214 } 00:06:25.214 ] 00:06:25.214 } 00:06:25.473 [2024-11-26 19:14:23.685777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.474 [2024-11-26 19:14:23.731923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.474 [2024-11-26 19:14:23.785700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.474  [2024-11-26T19:14:24.173Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:25.733 00:06:25.733 19:14:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.733 00:06:25.733 real 0m16.984s 00:06:25.733 user 0m12.002s 00:06:25.733 sys 0m6.650s 00:06:25.733 19:14:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.733 19:14:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.733 ************************************ 00:06:25.733 END TEST spdk_dd_basic_rw 00:06:25.733 ************************************ 00:06:25.733 19:14:24 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:25.733 19:14:24 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.733 19:14:24 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.733 19:14:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:25.733 ************************************ 00:06:25.733 START TEST spdk_dd_posix 00:06:25.733 ************************************ 00:06:25.733 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:25.993 * Looking for test storage... 
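Every bdev-backed spdk_dd call in the section above takes its block-device configuration as JSON on a spare file descriptor (--json /dev/fd/62 in the command lines): the subsystems block echoed in the log attaches the PCIe controller at 0000:00:10.0 as Nvme0 and then waits for bdev examination to finish. One way to wire that up from bash, assuming a small gen_conf helper that emits exactly the JSON shown above:

    gen_conf() {
        # same bdev subsystem config the log prints before each bdev-backed run
        echo '{"subsystems":[{"subsystem":"bdev","config":[
              {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
               "method":"bdev_nvme_attach_controller"},
              {"method":"bdev_wait_for_examine"}]}]}'
    }
    # zero one 1 MiB unit on the bdev, feeding the config in on fd 62 as the clear_nvme step above does
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 62< <(gen_conf)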
00:06:25.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.993 --rc genhtml_branch_coverage=1 00:06:25.993 --rc genhtml_function_coverage=1 00:06:25.993 --rc genhtml_legend=1 00:06:25.993 --rc geninfo_all_blocks=1 00:06:25.993 --rc geninfo_unexecuted_blocks=1 00:06:25.993 00:06:25.993 ' 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.993 --rc genhtml_branch_coverage=1 00:06:25.993 --rc genhtml_function_coverage=1 00:06:25.993 --rc genhtml_legend=1 00:06:25.993 --rc geninfo_all_blocks=1 00:06:25.993 --rc geninfo_unexecuted_blocks=1 00:06:25.993 00:06:25.993 ' 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.993 --rc genhtml_branch_coverage=1 00:06:25.993 --rc genhtml_function_coverage=1 00:06:25.993 --rc genhtml_legend=1 00:06:25.993 --rc geninfo_all_blocks=1 00:06:25.993 --rc geninfo_unexecuted_blocks=1 00:06:25.993 00:06:25.993 ' 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.993 --rc genhtml_branch_coverage=1 00:06:25.993 --rc genhtml_function_coverage=1 00:06:25.993 --rc genhtml_legend=1 00:06:25.993 --rc geninfo_all_blocks=1 00:06:25.993 --rc geninfo_unexecuted_blocks=1 00:06:25.993 00:06:25.993 ' 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:25.993 19:14:24 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:25.994 * First test run, liburing in use 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.994 ************************************ 00:06:25.994 START TEST dd_flag_append 00:06:25.994 ************************************ 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=rts404zootjsp8jj7rgm2ucb0k8x1s0s 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=svfgk2o4dvhtlugdm9vqmlh9h0vy0jli 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s rts404zootjsp8jj7rgm2ucb0k8x1s0s 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s svfgk2o4dvhtlugdm9vqmlh9h0vy0jli 00:06:25.994 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:25.994 [2024-11-26 19:14:24.400986] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
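The dd_flag_append run starting here checks O_APPEND semantics: gen_bytes produced the two 32-byte strings dump0 and dump1 above, both were written out with printf %s, and dump0 is now copied onto the destination with --oflag=append; the test passes only if the destination ends up as dump1 immediately followed by dump0, which is the concatenated string compared a few lines below. Reduced to its shape, with shortened stand-in values:

    dump0='rts404...'      # stand-ins; the real run uses the full 32-byte strings shown above
    dump1='svfgk2o4...'
    printf %s "$dump1" > dd.dump1          # pre-existing destination contents
    printf %s "$dump0" > dd.dump0
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]] && echo 'append kept the existing bytes'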
00:06:25.994 [2024-11-26 19:14:24.401080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59971 ] 00:06:26.253 [2024-11-26 19:14:24.546354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.253 [2024-11-26 19:14:24.593846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.253 [2024-11-26 19:14:24.646391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.253  [2024-11-26T19:14:24.952Z] Copying: 32/32 [B] (average 31 kBps) 00:06:26.512 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ svfgk2o4dvhtlugdm9vqmlh9h0vy0jlirts404zootjsp8jj7rgm2ucb0k8x1s0s == \s\v\f\g\k\2\o\4\d\v\h\t\l\u\g\d\m\9\v\q\m\l\h\9\h\0\v\y\0\j\l\i\r\t\s\4\0\4\z\o\o\t\j\s\p\8\j\j\7\r\g\m\2\u\c\b\0\k\8\x\1\s\0\s ]] 00:06:26.512 00:06:26.512 real 0m0.511s 00:06:26.512 user 0m0.267s 00:06:26.512 sys 0m0.265s 00:06:26.512 ************************************ 00:06:26.512 END TEST dd_flag_append 00:06:26.512 ************************************ 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.512 ************************************ 00:06:26.512 START TEST dd_flag_directory 00:06:26.512 ************************************ 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.512 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.513 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.513 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.513 19:14:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.513 [2024-11-26 19:14:24.946373] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:26.513 [2024-11-26 19:14:24.946452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:06:26.772 [2024-11-26 19:14:25.086788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.772 [2024-11-26 19:14:25.134962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.772 [2024-11-26 19:14:25.186736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.094 [2024-11-26 19:14:25.219712] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.094 [2024-11-26 19:14:25.219761] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.094 [2024-11-26 19:14:25.219793] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.094 [2024-11-26 19:14:25.335464] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.094 19:14:25 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.094 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:27.094 [2024-11-26 19:14:25.461864] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:27.094 [2024-11-26 19:14:25.462000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60009 ] 00:06:27.360 [2024-11-26 19:14:25.610854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.360 [2024-11-26 19:14:25.655630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.360 [2024-11-26 19:14:25.710083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.360 [2024-11-26 19:14:25.743916] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.360 [2024-11-26 19:14:25.743980] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.360 [2024-11-26 19:14:25.744013] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.619 [2024-11-26 19:14:25.855485] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.619 00:06:27.619 real 0m1.013s 00:06:27.619 user 0m0.539s 00:06:27.619 sys 0m0.265s 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:27.619 ************************************ 00:06:27.619 END TEST dd_flag_directory 00:06:27.619 ************************************ 00:06:27.619 19:14:25 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.619 ************************************ 00:06:27.619 START TEST dd_flag_nofollow 00:06:27.619 ************************************ 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.619 19:14:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.619 [2024-11-26 19:14:26.023421] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
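dd_flag_nofollow, which begins here, points symlinks at both dump files (the ln -fs calls above) and then drives spdk_dd through the links: with --iflag=nofollow or --oflag=nofollow the open must fail, and the NOT wrapper turns that expected failure into a pass, which is why the 'Too many levels of symbolic links' errors further down are the intended outcome; a final copy through the link without the flag must still succeed. In outline:

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    # must fail: nofollow refuses to open through a symlink (the ELOOP-style errors in the log)
    ! spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
    ! spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
    # must succeed: without the flag the link is followed normally
    spdk_dd --if=dd.dump0.link --of=dd.dump1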
00:06:27.619 [2024-11-26 19:14:26.024045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:06:27.880 [2024-11-26 19:14:26.168879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.880 [2024-11-26 19:14:26.208316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.880 [2024-11-26 19:14:26.259444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.880 [2024-11-26 19:14:26.290640] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:27.880 [2024-11-26 19:14:26.290689] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:27.880 [2024-11-26 19:14:26.290720] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.139 [2024-11-26 19:14:26.408901] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.139 19:14:26 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.139 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:28.139 [2024-11-26 19:14:26.518063] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:28.139 [2024-11-26 19:14:26.518152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60047 ] 00:06:28.398 [2024-11-26 19:14:26.661838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.398 [2024-11-26 19:14:26.705998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.398 [2024-11-26 19:14:26.757691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.398 [2024-11-26 19:14:26.790367] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:28.398 [2024-11-26 19:14:26.790429] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:28.398 [2024-11-26 19:14:26.790463] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.658 [2024-11-26 19:14:26.901020] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:28.658 19:14:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.658 [2024-11-26 19:14:27.011397] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
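The es= assignments traced through these failure cases are the harness normalizing exit statuses inside its NOT helper: the rejected spdk_dd run exits with a large status (216 here, 236 in the directory case earlier), anything above 128 is folded down by 128, the remainder collapses to 1, and the final (( !es == 0 )) makes the wrapper succeed exactly when the wrapped command failed. Roughly, as a hedged reconstruction rather than the exact autotest_common.sh code:

    NOT() {
        local es=0
        "$@" || es=$?                          # run the wrapped command, capture its status
        (( es > 128 )) && es=$(( es - 128 ))   # 216 -> 88, 236 -> 108, as in the trace
        (( es > 1 )) && es=1                   # collapse any remaining failure to 1
        return $(( !es ))                      # succeed only if the command failed
    }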
00:06:28.658 [2024-11-26 19:14:27.011508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60049 ] 00:06:28.917 [2024-11-26 19:14:27.151965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.917 [2024-11-26 19:14:27.197288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.917 [2024-11-26 19:14:27.249183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.917  [2024-11-26T19:14:27.617Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.177 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ behekrqlb1v8c7idy07us7jlxb5hzdx59aobqmhoh4rrnfxoz8s2gne2lgjm56h9iivd0jpheaps7cyp3iaqipmocu7ljtokzrtaacfuabdyolq3pjvk1vq6nmg2f5dk8pwi4o544f0sdcod5uc7nuo922bdbr5swi0hvxe1qkaeup46ai0aho6zrdy5c27235fcej4mcgk3qwk5kwtplgujf8ks7bhibwb68tjo8e1v7l2is61phymfktiturdczamn2x2xf0psh5d0hnkfj678g3ufu3q5n1pq1mjh6nlt494yg6s4uo5ldqzi9xmf44maxosta5xonco94y8kt5hsi10vxrf6171gc98gd4oj3m9wi5erhoze62vztuz9z36p5bvi1h6lii5o78b8y4yuh7w4vflyxehf3myypoufckvy5x9pwc1hcq25ioz06wbdr6k19szvz127pz1yzm2wq9nowybykaf50l5p8sda61ehsywroepmrj9sqbhn == \b\e\h\e\k\r\q\l\b\1\v\8\c\7\i\d\y\0\7\u\s\7\j\l\x\b\5\h\z\d\x\5\9\a\o\b\q\m\h\o\h\4\r\r\n\f\x\o\z\8\s\2\g\n\e\2\l\g\j\m\5\6\h\9\i\i\v\d\0\j\p\h\e\a\p\s\7\c\y\p\3\i\a\q\i\p\m\o\c\u\7\l\j\t\o\k\z\r\t\a\a\c\f\u\a\b\d\y\o\l\q\3\p\j\v\k\1\v\q\6\n\m\g\2\f\5\d\k\8\p\w\i\4\o\5\4\4\f\0\s\d\c\o\d\5\u\c\7\n\u\o\9\2\2\b\d\b\r\5\s\w\i\0\h\v\x\e\1\q\k\a\e\u\p\4\6\a\i\0\a\h\o\6\z\r\d\y\5\c\2\7\2\3\5\f\c\e\j\4\m\c\g\k\3\q\w\k\5\k\w\t\p\l\g\u\j\f\8\k\s\7\b\h\i\b\w\b\6\8\t\j\o\8\e\1\v\7\l\2\i\s\6\1\p\h\y\m\f\k\t\i\t\u\r\d\c\z\a\m\n\2\x\2\x\f\0\p\s\h\5\d\0\h\n\k\f\j\6\7\8\g\3\u\f\u\3\q\5\n\1\p\q\1\m\j\h\6\n\l\t\4\9\4\y\g\6\s\4\u\o\5\l\d\q\z\i\9\x\m\f\4\4\m\a\x\o\s\t\a\5\x\o\n\c\o\9\4\y\8\k\t\5\h\s\i\1\0\v\x\r\f\6\1\7\1\g\c\9\8\g\d\4\o\j\3\m\9\w\i\5\e\r\h\o\z\e\6\2\v\z\t\u\z\9\z\3\6\p\5\b\v\i\1\h\6\l\i\i\5\o\7\8\b\8\y\4\y\u\h\7\w\4\v\f\l\y\x\e\h\f\3\m\y\y\p\o\u\f\c\k\v\y\5\x\9\p\w\c\1\h\c\q\2\5\i\o\z\0\6\w\b\d\r\6\k\1\9\s\z\v\z\1\2\7\p\z\1\y\z\m\2\w\q\9\n\o\w\y\b\y\k\a\f\5\0\l\5\p\8\s\d\a\6\1\e\h\s\y\w\r\o\e\p\m\r\j\9\s\q\b\h\n ]] 00:06:29.177 00:06:29.177 real 0m1.502s 00:06:29.177 user 0m0.787s 00:06:29.177 sys 0m0.528s 00:06:29.177 ************************************ 00:06:29.177 END TEST dd_flag_nofollow 00:06:29.177 ************************************ 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:29.177 ************************************ 00:06:29.177 START TEST dd_flag_noatime 00:06:29.177 ************************************ 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732648467 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732648467 00:06:29.177 19:14:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:30.114 19:14:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.373 [2024-11-26 19:14:28.593450] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:30.373 [2024-11-26 19:14:28.593558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:06:30.373 [2024-11-26 19:14:28.747231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.373 [2024-11-26 19:14:28.802704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.633 [2024-11-26 19:14:28.861239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.633  [2024-11-26T19:14:29.073Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.633 00:06:30.892 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.892 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732648467 )) 00:06:30.892 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.892 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732648467 )) 00:06:30.892 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.892 [2024-11-26 19:14:29.144474] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
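The noatime case brackets the copies with stat --printf=%X (access time, seconds since the epoch): the atime of dump0 is captured first (1732648467 above), spdk_dd reads the file with --iflag=noatime and the test asserts the atime has not moved, after which a plain read is expected to push it past the recorded value, which is the (( atime_if < ... )) check that follows. Condensed:

    atime_if=$(stat --printf=%X dd.dump0)            # access time before any copy
    sleep 1
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime read: atime must not move
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( atime_if < $(stat --printf=%X dd.dump0) ))    # ordinary read: atime advances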
00:06:30.892 [2024-11-26 19:14:29.144579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60105 ] 00:06:30.892 [2024-11-26 19:14:29.289499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.152 [2024-11-26 19:14:29.335154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.152 [2024-11-26 19:14:29.392750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.152  [2024-11-26T19:14:29.850Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.410 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732648469 )) 00:06:31.410 00:06:31.410 real 0m2.104s 00:06:31.410 user 0m0.576s 00:06:31.410 sys 0m0.587s 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:31.410 ************************************ 00:06:31.410 END TEST dd_flag_noatime 00:06:31.410 ************************************ 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:31.410 ************************************ 00:06:31.410 START TEST dd_flags_misc 00:06:31.410 ************************************ 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.410 19:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:31.410 [2024-11-26 19:14:29.735728] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
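dd_flags_misc, which starts here, sweeps the cross product of open flags: flags_ro is (direct nonblock), flags_rw is the same list plus sync and dsync, and every (iflag, oflag) pair is run through a dump0 to dump1 copy whose result is compared back, which is why the same 512-byte check repeats below for direct, nonblock, sync and dsync. The loop shape, as a sketch:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            # each combination is followed by the same content comparison seen in the log
        done
    done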
00:06:31.410 [2024-11-26 19:14:29.735838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60139 ] 00:06:31.670 [2024-11-26 19:14:29.881315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.670 [2024-11-26 19:14:29.928489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.670 [2024-11-26 19:14:29.983114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.670  [2024-11-26T19:14:30.370Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.930 00:06:31.930 19:14:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bcd3xhwrmkidw7dh7szc1yerxaxcglkkvtauz8xsu9xuxw487wwp9asuls9zq369uj2e4wltqek7zclyrhzd5wbe0qtzkxl8hjejzy93ib0d79c8pg74pjblz3vjzo2voryvmls5hyec44vledhza4v1h7fkzv59czs24yqra8475srrc4onzb4660oik2t2d29tp2w56p63bi8gc2tgxcrq4fwwu4s754xxi77gvoojxf20jbnrf3dycy0blmp1hm5mrs62hnmmcdod8e3m0c2j4g9x27z6yam3mrcg2692ec25hst9c1mne9svrjqeqwxe6crbpex6vjozwg1258e4evr2ogsjbzuncsteqr0m8cqv5ke4x8969014h0hgufjgdtxx6g8hvvj9yt2x3a7gc3mxt2tidpno9j2x79s7y53yhbgx4obvy56wqi1rccisqqkeq6u3mqxqcj6l8nqxqe6mjhljhxyeg9c1s7jewdy51krcxf7akxm2290c == \b\c\d\3\x\h\w\r\m\k\i\d\w\7\d\h\7\s\z\c\1\y\e\r\x\a\x\c\g\l\k\k\v\t\a\u\z\8\x\s\u\9\x\u\x\w\4\8\7\w\w\p\9\a\s\u\l\s\9\z\q\3\6\9\u\j\2\e\4\w\l\t\q\e\k\7\z\c\l\y\r\h\z\d\5\w\b\e\0\q\t\z\k\x\l\8\h\j\e\j\z\y\9\3\i\b\0\d\7\9\c\8\p\g\7\4\p\j\b\l\z\3\v\j\z\o\2\v\o\r\y\v\m\l\s\5\h\y\e\c\4\4\v\l\e\d\h\z\a\4\v\1\h\7\f\k\z\v\5\9\c\z\s\2\4\y\q\r\a\8\4\7\5\s\r\r\c\4\o\n\z\b\4\6\6\0\o\i\k\2\t\2\d\2\9\t\p\2\w\5\6\p\6\3\b\i\8\g\c\2\t\g\x\c\r\q\4\f\w\w\u\4\s\7\5\4\x\x\i\7\7\g\v\o\o\j\x\f\2\0\j\b\n\r\f\3\d\y\c\y\0\b\l\m\p\1\h\m\5\m\r\s\6\2\h\n\m\m\c\d\o\d\8\e\3\m\0\c\2\j\4\g\9\x\2\7\z\6\y\a\m\3\m\r\c\g\2\6\9\2\e\c\2\5\h\s\t\9\c\1\m\n\e\9\s\v\r\j\q\e\q\w\x\e\6\c\r\b\p\e\x\6\v\j\o\z\w\g\1\2\5\8\e\4\e\v\r\2\o\g\s\j\b\z\u\n\c\s\t\e\q\r\0\m\8\c\q\v\5\k\e\4\x\8\9\6\9\0\1\4\h\0\h\g\u\f\j\g\d\t\x\x\6\g\8\h\v\v\j\9\y\t\2\x\3\a\7\g\c\3\m\x\t\2\t\i\d\p\n\o\9\j\2\x\7\9\s\7\y\5\3\y\h\b\g\x\4\o\b\v\y\5\6\w\q\i\1\r\c\c\i\s\q\q\k\e\q\6\u\3\m\q\x\q\c\j\6\l\8\n\q\x\q\e\6\m\j\h\l\j\h\x\y\e\g\9\c\1\s\7\j\e\w\d\y\5\1\k\r\c\x\f\7\a\k\x\m\2\2\9\0\c ]] 00:06:31.930 19:14:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.930 19:14:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:31.930 [2024-11-26 19:14:30.242260] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:31.930 [2024-11-26 19:14:30.242373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60143 ] 00:06:32.189 [2024-11-26 19:14:30.384723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.189 [2024-11-26 19:14:30.430607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.189 [2024-11-26 19:14:30.486172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.189  [2024-11-26T19:14:30.888Z] Copying: 512/512 [B] (average 500 kBps) 00:06:32.448 00:06:32.448 19:14:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bcd3xhwrmkidw7dh7szc1yerxaxcglkkvtauz8xsu9xuxw487wwp9asuls9zq369uj2e4wltqek7zclyrhzd5wbe0qtzkxl8hjejzy93ib0d79c8pg74pjblz3vjzo2voryvmls5hyec44vledhza4v1h7fkzv59czs24yqra8475srrc4onzb4660oik2t2d29tp2w56p63bi8gc2tgxcrq4fwwu4s754xxi77gvoojxf20jbnrf3dycy0blmp1hm5mrs62hnmmcdod8e3m0c2j4g9x27z6yam3mrcg2692ec25hst9c1mne9svrjqeqwxe6crbpex6vjozwg1258e4evr2ogsjbzuncsteqr0m8cqv5ke4x8969014h0hgufjgdtxx6g8hvvj9yt2x3a7gc3mxt2tidpno9j2x79s7y53yhbgx4obvy56wqi1rccisqqkeq6u3mqxqcj6l8nqxqe6mjhljhxyeg9c1s7jewdy51krcxf7akxm2290c == \b\c\d\3\x\h\w\r\m\k\i\d\w\7\d\h\7\s\z\c\1\y\e\r\x\a\x\c\g\l\k\k\v\t\a\u\z\8\x\s\u\9\x\u\x\w\4\8\7\w\w\p\9\a\s\u\l\s\9\z\q\3\6\9\u\j\2\e\4\w\l\t\q\e\k\7\z\c\l\y\r\h\z\d\5\w\b\e\0\q\t\z\k\x\l\8\h\j\e\j\z\y\9\3\i\b\0\d\7\9\c\8\p\g\7\4\p\j\b\l\z\3\v\j\z\o\2\v\o\r\y\v\m\l\s\5\h\y\e\c\4\4\v\l\e\d\h\z\a\4\v\1\h\7\f\k\z\v\5\9\c\z\s\2\4\y\q\r\a\8\4\7\5\s\r\r\c\4\o\n\z\b\4\6\6\0\o\i\k\2\t\2\d\2\9\t\p\2\w\5\6\p\6\3\b\i\8\g\c\2\t\g\x\c\r\q\4\f\w\w\u\4\s\7\5\4\x\x\i\7\7\g\v\o\o\j\x\f\2\0\j\b\n\r\f\3\d\y\c\y\0\b\l\m\p\1\h\m\5\m\r\s\6\2\h\n\m\m\c\d\o\d\8\e\3\m\0\c\2\j\4\g\9\x\2\7\z\6\y\a\m\3\m\r\c\g\2\6\9\2\e\c\2\5\h\s\t\9\c\1\m\n\e\9\s\v\r\j\q\e\q\w\x\e\6\c\r\b\p\e\x\6\v\j\o\z\w\g\1\2\5\8\e\4\e\v\r\2\o\g\s\j\b\z\u\n\c\s\t\e\q\r\0\m\8\c\q\v\5\k\e\4\x\8\9\6\9\0\1\4\h\0\h\g\u\f\j\g\d\t\x\x\6\g\8\h\v\v\j\9\y\t\2\x\3\a\7\g\c\3\m\x\t\2\t\i\d\p\n\o\9\j\2\x\7\9\s\7\y\5\3\y\h\b\g\x\4\o\b\v\y\5\6\w\q\i\1\r\c\c\i\s\q\q\k\e\q\6\u\3\m\q\x\q\c\j\6\l\8\n\q\x\q\e\6\m\j\h\l\j\h\x\y\e\g\9\c\1\s\7\j\e\w\d\y\5\1\k\r\c\x\f\7\a\k\x\m\2\2\9\0\c ]] 00:06:32.448 19:14:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.449 19:14:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:32.449 [2024-11-26 19:14:30.742549] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:32.449 [2024-11-26 19:14:30.742648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60160 ] 00:06:32.708 [2024-11-26 19:14:30.888852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.708 [2024-11-26 19:14:30.934635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.708 [2024-11-26 19:14:30.986634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.708  [2024-11-26T19:14:31.407Z] Copying: 512/512 [B] (average 71 kBps) 00:06:32.967 00:06:32.967 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bcd3xhwrmkidw7dh7szc1yerxaxcglkkvtauz8xsu9xuxw487wwp9asuls9zq369uj2e4wltqek7zclyrhzd5wbe0qtzkxl8hjejzy93ib0d79c8pg74pjblz3vjzo2voryvmls5hyec44vledhza4v1h7fkzv59czs24yqra8475srrc4onzb4660oik2t2d29tp2w56p63bi8gc2tgxcrq4fwwu4s754xxi77gvoojxf20jbnrf3dycy0blmp1hm5mrs62hnmmcdod8e3m0c2j4g9x27z6yam3mrcg2692ec25hst9c1mne9svrjqeqwxe6crbpex6vjozwg1258e4evr2ogsjbzuncsteqr0m8cqv5ke4x8969014h0hgufjgdtxx6g8hvvj9yt2x3a7gc3mxt2tidpno9j2x79s7y53yhbgx4obvy56wqi1rccisqqkeq6u3mqxqcj6l8nqxqe6mjhljhxyeg9c1s7jewdy51krcxf7akxm2290c == \b\c\d\3\x\h\w\r\m\k\i\d\w\7\d\h\7\s\z\c\1\y\e\r\x\a\x\c\g\l\k\k\v\t\a\u\z\8\x\s\u\9\x\u\x\w\4\8\7\w\w\p\9\a\s\u\l\s\9\z\q\3\6\9\u\j\2\e\4\w\l\t\q\e\k\7\z\c\l\y\r\h\z\d\5\w\b\e\0\q\t\z\k\x\l\8\h\j\e\j\z\y\9\3\i\b\0\d\7\9\c\8\p\g\7\4\p\j\b\l\z\3\v\j\z\o\2\v\o\r\y\v\m\l\s\5\h\y\e\c\4\4\v\l\e\d\h\z\a\4\v\1\h\7\f\k\z\v\5\9\c\z\s\2\4\y\q\r\a\8\4\7\5\s\r\r\c\4\o\n\z\b\4\6\6\0\o\i\k\2\t\2\d\2\9\t\p\2\w\5\6\p\6\3\b\i\8\g\c\2\t\g\x\c\r\q\4\f\w\w\u\4\s\7\5\4\x\x\i\7\7\g\v\o\o\j\x\f\2\0\j\b\n\r\f\3\d\y\c\y\0\b\l\m\p\1\h\m\5\m\r\s\6\2\h\n\m\m\c\d\o\d\8\e\3\m\0\c\2\j\4\g\9\x\2\7\z\6\y\a\m\3\m\r\c\g\2\6\9\2\e\c\2\5\h\s\t\9\c\1\m\n\e\9\s\v\r\j\q\e\q\w\x\e\6\c\r\b\p\e\x\6\v\j\o\z\w\g\1\2\5\8\e\4\e\v\r\2\o\g\s\j\b\z\u\n\c\s\t\e\q\r\0\m\8\c\q\v\5\k\e\4\x\8\9\6\9\0\1\4\h\0\h\g\u\f\j\g\d\t\x\x\6\g\8\h\v\v\j\9\y\t\2\x\3\a\7\g\c\3\m\x\t\2\t\i\d\p\n\o\9\j\2\x\7\9\s\7\y\5\3\y\h\b\g\x\4\o\b\v\y\5\6\w\q\i\1\r\c\c\i\s\q\q\k\e\q\6\u\3\m\q\x\q\c\j\6\l\8\n\q\x\q\e\6\m\j\h\l\j\h\x\y\e\g\9\c\1\s\7\j\e\w\d\y\5\1\k\r\c\x\f\7\a\k\x\m\2\2\9\0\c ]] 00:06:32.967 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.967 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:32.967 [2024-11-26 19:14:31.247216] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:32.967 [2024-11-26 19:14:31.247317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:06:32.967 [2024-11-26 19:14:31.391159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.227 [2024-11-26 19:14:31.436852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.227 [2024-11-26 19:14:31.492736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.227  [2024-11-26T19:14:31.927Z] Copying: 512/512 [B] (average 250 kBps) 00:06:33.487 00:06:33.487 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bcd3xhwrmkidw7dh7szc1yerxaxcglkkvtauz8xsu9xuxw487wwp9asuls9zq369uj2e4wltqek7zclyrhzd5wbe0qtzkxl8hjejzy93ib0d79c8pg74pjblz3vjzo2voryvmls5hyec44vledhza4v1h7fkzv59czs24yqra8475srrc4onzb4660oik2t2d29tp2w56p63bi8gc2tgxcrq4fwwu4s754xxi77gvoojxf20jbnrf3dycy0blmp1hm5mrs62hnmmcdod8e3m0c2j4g9x27z6yam3mrcg2692ec25hst9c1mne9svrjqeqwxe6crbpex6vjozwg1258e4evr2ogsjbzuncsteqr0m8cqv5ke4x8969014h0hgufjgdtxx6g8hvvj9yt2x3a7gc3mxt2tidpno9j2x79s7y53yhbgx4obvy56wqi1rccisqqkeq6u3mqxqcj6l8nqxqe6mjhljhxyeg9c1s7jewdy51krcxf7akxm2290c == \b\c\d\3\x\h\w\r\m\k\i\d\w\7\d\h\7\s\z\c\1\y\e\r\x\a\x\c\g\l\k\k\v\t\a\u\z\8\x\s\u\9\x\u\x\w\4\8\7\w\w\p\9\a\s\u\l\s\9\z\q\3\6\9\u\j\2\e\4\w\l\t\q\e\k\7\z\c\l\y\r\h\z\d\5\w\b\e\0\q\t\z\k\x\l\8\h\j\e\j\z\y\9\3\i\b\0\d\7\9\c\8\p\g\7\4\p\j\b\l\z\3\v\j\z\o\2\v\o\r\y\v\m\l\s\5\h\y\e\c\4\4\v\l\e\d\h\z\a\4\v\1\h\7\f\k\z\v\5\9\c\z\s\2\4\y\q\r\a\8\4\7\5\s\r\r\c\4\o\n\z\b\4\6\6\0\o\i\k\2\t\2\d\2\9\t\p\2\w\5\6\p\6\3\b\i\8\g\c\2\t\g\x\c\r\q\4\f\w\w\u\4\s\7\5\4\x\x\i\7\7\g\v\o\o\j\x\f\2\0\j\b\n\r\f\3\d\y\c\y\0\b\l\m\p\1\h\m\5\m\r\s\6\2\h\n\m\m\c\d\o\d\8\e\3\m\0\c\2\j\4\g\9\x\2\7\z\6\y\a\m\3\m\r\c\g\2\6\9\2\e\c\2\5\h\s\t\9\c\1\m\n\e\9\s\v\r\j\q\e\q\w\x\e\6\c\r\b\p\e\x\6\v\j\o\z\w\g\1\2\5\8\e\4\e\v\r\2\o\g\s\j\b\z\u\n\c\s\t\e\q\r\0\m\8\c\q\v\5\k\e\4\x\8\9\6\9\0\1\4\h\0\h\g\u\f\j\g\d\t\x\x\6\g\8\h\v\v\j\9\y\t\2\x\3\a\7\g\c\3\m\x\t\2\t\i\d\p\n\o\9\j\2\x\7\9\s\7\y\5\3\y\h\b\g\x\4\o\b\v\y\5\6\w\q\i\1\r\c\c\i\s\q\q\k\e\q\6\u\3\m\q\x\q\c\j\6\l\8\n\q\x\q\e\6\m\j\h\l\j\h\x\y\e\g\9\c\1\s\7\j\e\w\d\y\5\1\k\r\c\x\f\7\a\k\x\m\2\2\9\0\c ]] 00:06:33.487 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:33.487 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:33.487 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:33.487 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:33.487 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.487 19:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:33.487 [2024-11-26 19:14:31.760933] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:33.487 [2024-11-26 19:14:31.761036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60179 ] 00:06:33.487 [2024-11-26 19:14:31.904502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.746 [2024-11-26 19:14:31.950517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.746 [2024-11-26 19:14:32.003433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.746  [2024-11-26T19:14:32.447Z] Copying: 512/512 [B] (average 500 kBps) 00:06:34.007 00:06:34.007 19:14:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5y5e4hqvjikvwdld60pftln53kbe76kkazi2ioc1c0c3lahj34fueudlg3rnog2atjzi8nwkineg0ctny4jfdfu26fnr54lhodjrm8xchptrm12i2n0wzyxh4lbpzzh2f4rt1qysg7u64jswmiu2gj9ygfy5quhr7kwjjc3um8mo535cvh7r28a79w27zmod486pb8b6qwrc6d9ej0sqr6kfqi116194a0xg892mc8p94m4hppvuamz2k44duswednim0rs3mj9138zns7l9yhbvx6nb8ra1jzweh649iae8cpppdt2as1gfp5cz6gykaut9cuflq2tgggi02w71wcoqfltnrr56h9f8hiq606hkns4tn06qk94kww6wlq70zmx50e149tixti38lr1e6k35gvjrk6czf4mrkr9nncwtl9adex7pe2ejslizoewehuibmyogn0ynb2zoov495fp2btdpdxvo2nq4n4rjfevjbtizi7urocmfhgev01k == \b\5\y\5\e\4\h\q\v\j\i\k\v\w\d\l\d\6\0\p\f\t\l\n\5\3\k\b\e\7\6\k\k\a\z\i\2\i\o\c\1\c\0\c\3\l\a\h\j\3\4\f\u\e\u\d\l\g\3\r\n\o\g\2\a\t\j\z\i\8\n\w\k\i\n\e\g\0\c\t\n\y\4\j\f\d\f\u\2\6\f\n\r\5\4\l\h\o\d\j\r\m\8\x\c\h\p\t\r\m\1\2\i\2\n\0\w\z\y\x\h\4\l\b\p\z\z\h\2\f\4\r\t\1\q\y\s\g\7\u\6\4\j\s\w\m\i\u\2\g\j\9\y\g\f\y\5\q\u\h\r\7\k\w\j\j\c\3\u\m\8\m\o\5\3\5\c\v\h\7\r\2\8\a\7\9\w\2\7\z\m\o\d\4\8\6\p\b\8\b\6\q\w\r\c\6\d\9\e\j\0\s\q\r\6\k\f\q\i\1\1\6\1\9\4\a\0\x\g\8\9\2\m\c\8\p\9\4\m\4\h\p\p\v\u\a\m\z\2\k\4\4\d\u\s\w\e\d\n\i\m\0\r\s\3\m\j\9\1\3\8\z\n\s\7\l\9\y\h\b\v\x\6\n\b\8\r\a\1\j\z\w\e\h\6\4\9\i\a\e\8\c\p\p\p\d\t\2\a\s\1\g\f\p\5\c\z\6\g\y\k\a\u\t\9\c\u\f\l\q\2\t\g\g\g\i\0\2\w\7\1\w\c\o\q\f\l\t\n\r\r\5\6\h\9\f\8\h\i\q\6\0\6\h\k\n\s\4\t\n\0\6\q\k\9\4\k\w\w\6\w\l\q\7\0\z\m\x\5\0\e\1\4\9\t\i\x\t\i\3\8\l\r\1\e\6\k\3\5\g\v\j\r\k\6\c\z\f\4\m\r\k\r\9\n\n\c\w\t\l\9\a\d\e\x\7\p\e\2\e\j\s\l\i\z\o\e\w\e\h\u\i\b\m\y\o\g\n\0\y\n\b\2\z\o\o\v\4\9\5\f\p\2\b\t\d\p\d\x\v\o\2\n\q\4\n\4\r\j\f\e\v\j\b\t\i\z\i\7\u\r\o\c\m\f\h\g\e\v\0\1\k ]] 00:06:34.007 19:14:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.007 19:14:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:34.007 [2024-11-26 19:14:32.254863] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:34.007 [2024-11-26 19:14:32.254976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60183 ] 00:06:34.007 [2024-11-26 19:14:32.400523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.007 [2024-11-26 19:14:32.444124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.266 [2024-11-26 19:14:32.499830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.266  [2024-11-26T19:14:32.706Z] Copying: 512/512 [B] (average 500 kBps) 00:06:34.266 00:06:34.525 19:14:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5y5e4hqvjikvwdld60pftln53kbe76kkazi2ioc1c0c3lahj34fueudlg3rnog2atjzi8nwkineg0ctny4jfdfu26fnr54lhodjrm8xchptrm12i2n0wzyxh4lbpzzh2f4rt1qysg7u64jswmiu2gj9ygfy5quhr7kwjjc3um8mo535cvh7r28a79w27zmod486pb8b6qwrc6d9ej0sqr6kfqi116194a0xg892mc8p94m4hppvuamz2k44duswednim0rs3mj9138zns7l9yhbvx6nb8ra1jzweh649iae8cpppdt2as1gfp5cz6gykaut9cuflq2tgggi02w71wcoqfltnrr56h9f8hiq606hkns4tn06qk94kww6wlq70zmx50e149tixti38lr1e6k35gvjrk6czf4mrkr9nncwtl9adex7pe2ejslizoewehuibmyogn0ynb2zoov495fp2btdpdxvo2nq4n4rjfevjbtizi7urocmfhgev01k == \b\5\y\5\e\4\h\q\v\j\i\k\v\w\d\l\d\6\0\p\f\t\l\n\5\3\k\b\e\7\6\k\k\a\z\i\2\i\o\c\1\c\0\c\3\l\a\h\j\3\4\f\u\e\u\d\l\g\3\r\n\o\g\2\a\t\j\z\i\8\n\w\k\i\n\e\g\0\c\t\n\y\4\j\f\d\f\u\2\6\f\n\r\5\4\l\h\o\d\j\r\m\8\x\c\h\p\t\r\m\1\2\i\2\n\0\w\z\y\x\h\4\l\b\p\z\z\h\2\f\4\r\t\1\q\y\s\g\7\u\6\4\j\s\w\m\i\u\2\g\j\9\y\g\f\y\5\q\u\h\r\7\k\w\j\j\c\3\u\m\8\m\o\5\3\5\c\v\h\7\r\2\8\a\7\9\w\2\7\z\m\o\d\4\8\6\p\b\8\b\6\q\w\r\c\6\d\9\e\j\0\s\q\r\6\k\f\q\i\1\1\6\1\9\4\a\0\x\g\8\9\2\m\c\8\p\9\4\m\4\h\p\p\v\u\a\m\z\2\k\4\4\d\u\s\w\e\d\n\i\m\0\r\s\3\m\j\9\1\3\8\z\n\s\7\l\9\y\h\b\v\x\6\n\b\8\r\a\1\j\z\w\e\h\6\4\9\i\a\e\8\c\p\p\p\d\t\2\a\s\1\g\f\p\5\c\z\6\g\y\k\a\u\t\9\c\u\f\l\q\2\t\g\g\g\i\0\2\w\7\1\w\c\o\q\f\l\t\n\r\r\5\6\h\9\f\8\h\i\q\6\0\6\h\k\n\s\4\t\n\0\6\q\k\9\4\k\w\w\6\w\l\q\7\0\z\m\x\5\0\e\1\4\9\t\i\x\t\i\3\8\l\r\1\e\6\k\3\5\g\v\j\r\k\6\c\z\f\4\m\r\k\r\9\n\n\c\w\t\l\9\a\d\e\x\7\p\e\2\e\j\s\l\i\z\o\e\w\e\h\u\i\b\m\y\o\g\n\0\y\n\b\2\z\o\o\v\4\9\5\f\p\2\b\t\d\p\d\x\v\o\2\n\q\4\n\4\r\j\f\e\v\j\b\t\i\z\i\7\u\r\o\c\m\f\h\g\e\v\0\1\k ]] 00:06:34.525 19:14:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.525 19:14:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:34.525 [2024-11-26 19:14:32.763953] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:34.525 [2024-11-26 19:14:32.764550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60199 ] 00:06:34.525 [2024-11-26 19:14:32.910614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.525 [2024-11-26 19:14:32.956526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.785 [2024-11-26 19:14:33.009618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.785  [2024-11-26T19:14:33.225Z] Copying: 512/512 [B] (average 250 kBps) 00:06:34.785 00:06:34.785 19:14:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5y5e4hqvjikvwdld60pftln53kbe76kkazi2ioc1c0c3lahj34fueudlg3rnog2atjzi8nwkineg0ctny4jfdfu26fnr54lhodjrm8xchptrm12i2n0wzyxh4lbpzzh2f4rt1qysg7u64jswmiu2gj9ygfy5quhr7kwjjc3um8mo535cvh7r28a79w27zmod486pb8b6qwrc6d9ej0sqr6kfqi116194a0xg892mc8p94m4hppvuamz2k44duswednim0rs3mj9138zns7l9yhbvx6nb8ra1jzweh649iae8cpppdt2as1gfp5cz6gykaut9cuflq2tgggi02w71wcoqfltnrr56h9f8hiq606hkns4tn06qk94kww6wlq70zmx50e149tixti38lr1e6k35gvjrk6czf4mrkr9nncwtl9adex7pe2ejslizoewehuibmyogn0ynb2zoov495fp2btdpdxvo2nq4n4rjfevjbtizi7urocmfhgev01k == \b\5\y\5\e\4\h\q\v\j\i\k\v\w\d\l\d\6\0\p\f\t\l\n\5\3\k\b\e\7\6\k\k\a\z\i\2\i\o\c\1\c\0\c\3\l\a\h\j\3\4\f\u\e\u\d\l\g\3\r\n\o\g\2\a\t\j\z\i\8\n\w\k\i\n\e\g\0\c\t\n\y\4\j\f\d\f\u\2\6\f\n\r\5\4\l\h\o\d\j\r\m\8\x\c\h\p\t\r\m\1\2\i\2\n\0\w\z\y\x\h\4\l\b\p\z\z\h\2\f\4\r\t\1\q\y\s\g\7\u\6\4\j\s\w\m\i\u\2\g\j\9\y\g\f\y\5\q\u\h\r\7\k\w\j\j\c\3\u\m\8\m\o\5\3\5\c\v\h\7\r\2\8\a\7\9\w\2\7\z\m\o\d\4\8\6\p\b\8\b\6\q\w\r\c\6\d\9\e\j\0\s\q\r\6\k\f\q\i\1\1\6\1\9\4\a\0\x\g\8\9\2\m\c\8\p\9\4\m\4\h\p\p\v\u\a\m\z\2\k\4\4\d\u\s\w\e\d\n\i\m\0\r\s\3\m\j\9\1\3\8\z\n\s\7\l\9\y\h\b\v\x\6\n\b\8\r\a\1\j\z\w\e\h\6\4\9\i\a\e\8\c\p\p\p\d\t\2\a\s\1\g\f\p\5\c\z\6\g\y\k\a\u\t\9\c\u\f\l\q\2\t\g\g\g\i\0\2\w\7\1\w\c\o\q\f\l\t\n\r\r\5\6\h\9\f\8\h\i\q\6\0\6\h\k\n\s\4\t\n\0\6\q\k\9\4\k\w\w\6\w\l\q\7\0\z\m\x\5\0\e\1\4\9\t\i\x\t\i\3\8\l\r\1\e\6\k\3\5\g\v\j\r\k\6\c\z\f\4\m\r\k\r\9\n\n\c\w\t\l\9\a\d\e\x\7\p\e\2\e\j\s\l\i\z\o\e\w\e\h\u\i\b\m\y\o\g\n\0\y\n\b\2\z\o\o\v\4\9\5\f\p\2\b\t\d\p\d\x\v\o\2\n\q\4\n\4\r\j\f\e\v\j\b\t\i\z\i\7\u\r\o\c\m\f\h\g\e\v\0\1\k ]] 00:06:34.785 19:14:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.785 19:14:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:35.045 [2024-11-26 19:14:33.267621] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:35.045 [2024-11-26 19:14:33.267731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60203 ] 00:06:35.045 [2024-11-26 19:14:33.412628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.045 [2024-11-26 19:14:33.455954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.304 [2024-11-26 19:14:33.512722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.304  [2024-11-26T19:14:33.744Z] Copying: 512/512 [B] (average 250 kBps) 00:06:35.304 00:06:35.304 ************************************ 00:06:35.304 END TEST dd_flags_misc 00:06:35.304 ************************************ 00:06:35.304 19:14:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5y5e4hqvjikvwdld60pftln53kbe76kkazi2ioc1c0c3lahj34fueudlg3rnog2atjzi8nwkineg0ctny4jfdfu26fnr54lhodjrm8xchptrm12i2n0wzyxh4lbpzzh2f4rt1qysg7u64jswmiu2gj9ygfy5quhr7kwjjc3um8mo535cvh7r28a79w27zmod486pb8b6qwrc6d9ej0sqr6kfqi116194a0xg892mc8p94m4hppvuamz2k44duswednim0rs3mj9138zns7l9yhbvx6nb8ra1jzweh649iae8cpppdt2as1gfp5cz6gykaut9cuflq2tgggi02w71wcoqfltnrr56h9f8hiq606hkns4tn06qk94kww6wlq70zmx50e149tixti38lr1e6k35gvjrk6czf4mrkr9nncwtl9adex7pe2ejslizoewehuibmyogn0ynb2zoov495fp2btdpdxvo2nq4n4rjfevjbtizi7urocmfhgev01k == \b\5\y\5\e\4\h\q\v\j\i\k\v\w\d\l\d\6\0\p\f\t\l\n\5\3\k\b\e\7\6\k\k\a\z\i\2\i\o\c\1\c\0\c\3\l\a\h\j\3\4\f\u\e\u\d\l\g\3\r\n\o\g\2\a\t\j\z\i\8\n\w\k\i\n\e\g\0\c\t\n\y\4\j\f\d\f\u\2\6\f\n\r\5\4\l\h\o\d\j\r\m\8\x\c\h\p\t\r\m\1\2\i\2\n\0\w\z\y\x\h\4\l\b\p\z\z\h\2\f\4\r\t\1\q\y\s\g\7\u\6\4\j\s\w\m\i\u\2\g\j\9\y\g\f\y\5\q\u\h\r\7\k\w\j\j\c\3\u\m\8\m\o\5\3\5\c\v\h\7\r\2\8\a\7\9\w\2\7\z\m\o\d\4\8\6\p\b\8\b\6\q\w\r\c\6\d\9\e\j\0\s\q\r\6\k\f\q\i\1\1\6\1\9\4\a\0\x\g\8\9\2\m\c\8\p\9\4\m\4\h\p\p\v\u\a\m\z\2\k\4\4\d\u\s\w\e\d\n\i\m\0\r\s\3\m\j\9\1\3\8\z\n\s\7\l\9\y\h\b\v\x\6\n\b\8\r\a\1\j\z\w\e\h\6\4\9\i\a\e\8\c\p\p\p\d\t\2\a\s\1\g\f\p\5\c\z\6\g\y\k\a\u\t\9\c\u\f\l\q\2\t\g\g\g\i\0\2\w\7\1\w\c\o\q\f\l\t\n\r\r\5\6\h\9\f\8\h\i\q\6\0\6\h\k\n\s\4\t\n\0\6\q\k\9\4\k\w\w\6\w\l\q\7\0\z\m\x\5\0\e\1\4\9\t\i\x\t\i\3\8\l\r\1\e\6\k\3\5\g\v\j\r\k\6\c\z\f\4\m\r\k\r\9\n\n\c\w\t\l\9\a\d\e\x\7\p\e\2\e\j\s\l\i\z\o\e\w\e\h\u\i\b\m\y\o\g\n\0\y\n\b\2\z\o\o\v\4\9\5\f\p\2\b\t\d\p\d\x\v\o\2\n\q\4\n\4\r\j\f\e\v\j\b\t\i\z\i\7\u\r\o\c\m\f\h\g\e\v\0\1\k ]] 00:06:35.304 00:06:35.304 real 0m4.047s 00:06:35.304 user 0m2.111s 00:06:35.304 sys 0m2.107s 00:06:35.304 19:14:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.304 19:14:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:35.563 * Second test run, disabling liburing, forcing AIO 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:35.563 ************************************ 00:06:35.563 START TEST dd_flag_append_forced_aio 00:06:35.563 ************************************ 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=mj3cogrc0egyvb5asv1yvcrq6qkqse81 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=pd0tsenq86gwp5hlrkhqp0yv4vfw9xvw 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s mj3cogrc0egyvb5asv1yvcrq6qkqse81 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s pd0tsenq86gwp5hlrkhqp0yv4vfw9xvw 00:06:35.563 19:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:35.563 [2024-11-26 19:14:33.834133] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:35.563 [2024-11-26 19:14:33.834412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:06:35.563 [2024-11-26 19:14:33.981765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.821 [2024-11-26 19:14:34.029393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.821 [2024-11-26 19:14:34.081541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.821  [2024-11-26T19:14:34.520Z] Copying: 32/32 [B] (average 31 kBps) 00:06:36.080 00:06:36.080 ************************************ 00:06:36.080 END TEST dd_flag_append_forced_aio 00:06:36.080 ************************************ 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ pd0tsenq86gwp5hlrkhqp0yv4vfw9xvwmj3cogrc0egyvb5asv1yvcrq6qkqse81 == \p\d\0\t\s\e\n\q\8\6\g\w\p\5\h\l\r\k\h\q\p\0\y\v\4\v\f\w\9\x\v\w\m\j\3\c\o\g\r\c\0\e\g\y\v\b\5\a\s\v\1\y\v\c\r\q\6\q\k\q\s\e\8\1 ]] 00:06:36.080 00:06:36.080 real 0m0.525s 00:06:36.080 user 0m0.272s 00:06:36.080 sys 0m0.133s 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:36.080 ************************************ 00:06:36.080 START TEST dd_flag_directory_forced_aio 00:06:36.080 ************************************ 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.080 19:14:34 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.080 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.080 [2024-11-26 19:14:34.406615] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:36.080 [2024-11-26 19:14:34.406697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60258 ] 00:06:36.339 [2024-11-26 19:14:34.555604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.339 [2024-11-26 19:14:34.600691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.339 [2024-11-26 19:14:34.652710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.339 [2024-11-26 19:14:34.685236] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:36.339 [2024-11-26 19:14:34.685298] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:36.339 [2024-11-26 19:14:34.685315] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.598 [2024-11-26 19:14:34.799423] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.598 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.599 19:14:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:36.599 [2024-11-26 19:14:34.913454] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:36.599 [2024-11-26 19:14:34.913562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60273 ] 00:06:36.858 [2024-11-26 19:14:35.059601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.858 [2024-11-26 19:14:35.103155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.858 [2024-11-26 19:14:35.156010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.858 [2024-11-26 19:14:35.190117] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:36.858 [2024-11-26 19:14:35.190180] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:36.858 [2024-11-26 19:14:35.190213] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.117 [2024-11-26 19:14:35.301306] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.117 ************************************ 00:06:37.117 END TEST dd_flag_directory_forced_aio 00:06:37.117 ************************************ 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:37.117 19:14:35 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.117 00:06:37.117 real 0m1.009s 00:06:37.117 user 0m0.529s 00:06:37.117 sys 0m0.273s 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:37.117 ************************************ 00:06:37.117 START TEST dd_flag_nofollow_forced_aio 00:06:37.117 ************************************ 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.117 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.117 [2024-11-26 19:14:35.479137] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:37.117 [2024-11-26 19:14:35.479223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60296 ] 00:06:37.376 [2024-11-26 19:14:35.625406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.376 [2024-11-26 19:14:35.669192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.376 [2024-11-26 19:14:35.722148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.376 [2024-11-26 19:14:35.756000] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:37.376 [2024-11-26 19:14:35.756306] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:37.376 [2024-11-26 19:14:35.756334] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.634 [2024-11-26 19:14:35.871519] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.634 19:14:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:37.634 [2024-11-26 19:14:35.983022] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:37.634 [2024-11-26 19:14:35.983109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60311 ] 00:06:37.893 [2024-11-26 19:14:36.128854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.893 [2024-11-26 19:14:36.174245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.893 [2024-11-26 19:14:36.226847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.893 [2024-11-26 19:14:36.260489] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:37.893 [2024-11-26 19:14:36.260551] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:37.893 [2024-11-26 19:14:36.260584] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.151 [2024-11-26 19:14:36.373514] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:38.151 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:38.151 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.151 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:38.151 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:38.152 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:38.152 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.152 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:38.152 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.152 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.152 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.152 [2024-11-26 19:14:36.489213] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:38.152 [2024-11-26 19:14:36.489311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60313 ] 00:06:38.412 [2024-11-26 19:14:36.633591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.412 [2024-11-26 19:14:36.678318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.412 [2024-11-26 19:14:36.732689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.412  [2024-11-26T19:14:37.111Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.671 00:06:38.671 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ kp42tzbglib8u9jvkyz68egx0tmip1btiv0ytvo8zoqpglf6oj8wip1ocu8w4r5pqjrsqhr8zxh4zlvjvcg6b8rf3ylkkd7pnv8q6m0zeq6cowui9im8nknhtq8hcp3ncwocb1h4qs9yvsowegrdd0gol9i96x49ml8zslh4g6j8qp6oxq5bl2d3v5rl0oizcclb7eufh5hy5onv9h4ulz52soioue8pluz8sn77lmbgi7qm6fv0c7jx5lpjuq14inufq5e9vql9gpnpo67muz4xdo7uf0b11ofqhvd2trn4jpbfmkf41ghknr8ib5xfx2qyrcbux0wwhaxbl9or2x8yrv5b4heh3g8d3ntgkziufnzn6pmszhnwrlmh2hxuvf5ddr6bifyx2g711pn1ckjunu5ugy1f935ueqyztlxq65de6vtr3kztwnb5pjflyadnkj8vw6c0jjhjt2qxgck7nx6me1zgiyyetl25a1060qvmthlp2rroo4u0wanv == \k\p\4\2\t\z\b\g\l\i\b\8\u\9\j\v\k\y\z\6\8\e\g\x\0\t\m\i\p\1\b\t\i\v\0\y\t\v\o\8\z\o\q\p\g\l\f\6\o\j\8\w\i\p\1\o\c\u\8\w\4\r\5\p\q\j\r\s\q\h\r\8\z\x\h\4\z\l\v\j\v\c\g\6\b\8\r\f\3\y\l\k\k\d\7\p\n\v\8\q\6\m\0\z\e\q\6\c\o\w\u\i\9\i\m\8\n\k\n\h\t\q\8\h\c\p\3\n\c\w\o\c\b\1\h\4\q\s\9\y\v\s\o\w\e\g\r\d\d\0\g\o\l\9\i\9\6\x\4\9\m\l\8\z\s\l\h\4\g\6\j\8\q\p\6\o\x\q\5\b\l\2\d\3\v\5\r\l\0\o\i\z\c\c\l\b\7\e\u\f\h\5\h\y\5\o\n\v\9\h\4\u\l\z\5\2\s\o\i\o\u\e\8\p\l\u\z\8\s\n\7\7\l\m\b\g\i\7\q\m\6\f\v\0\c\7\j\x\5\l\p\j\u\q\1\4\i\n\u\f\q\5\e\9\v\q\l\9\g\p\n\p\o\6\7\m\u\z\4\x\d\o\7\u\f\0\b\1\1\o\f\q\h\v\d\2\t\r\n\4\j\p\b\f\m\k\f\4\1\g\h\k\n\r\8\i\b\5\x\f\x\2\q\y\r\c\b\u\x\0\w\w\h\a\x\b\l\9\o\r\2\x\8\y\r\v\5\b\4\h\e\h\3\g\8\d\3\n\t\g\k\z\i\u\f\n\z\n\6\p\m\s\z\h\n\w\r\l\m\h\2\h\x\u\v\f\5\d\d\r\6\b\i\f\y\x\2\g\7\1\1\p\n\1\c\k\j\u\n\u\5\u\g\y\1\f\9\3\5\u\e\q\y\z\t\l\x\q\6\5\d\e\6\v\t\r\3\k\z\t\w\n\b\5\p\j\f\l\y\a\d\n\k\j\8\v\w\6\c\0\j\j\h\j\t\2\q\x\g\c\k\7\n\x\6\m\e\1\z\g\i\y\y\e\t\l\2\5\a\1\0\6\0\q\v\m\t\h\l\p\2\r\r\o\o\4\u\0\w\a\n\v ]] 00:06:38.671 00:06:38.671 real 0m1.566s 00:06:38.671 user 0m0.808s 00:06:38.671 sys 0m0.417s 00:06:38.671 ************************************ 00:06:38.671 END TEST dd_flag_nofollow_forced_aio 00:06:38.671 ************************************ 00:06:38.671 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.671 19:14:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:38.671 ************************************ 00:06:38.671 START TEST dd_flag_noatime_forced_aio 00:06:38.671 ************************************ 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732648476 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732648476 00:06:38.671 19:14:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:40.048 19:14:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.048 [2024-11-26 19:14:38.109153] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:40.048 [2024-11-26 19:14:38.109245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:06:40.048 [2024-11-26 19:14:38.261026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.048 [2024-11-26 19:14:38.318952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.048 [2024-11-26 19:14:38.376175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.048  [2024-11-26T19:14:38.747Z] Copying: 512/512 [B] (average 500 kBps) 00:06:40.307 00:06:40.307 19:14:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:40.307 19:14:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732648476 )) 00:06:40.307 19:14:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.307 19:14:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732648476 )) 00:06:40.307 19:14:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.307 [2024-11-26 19:14:38.663712] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:40.307 [2024-11-26 19:14:38.663811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60366 ] 00:06:40.566 [2024-11-26 19:14:38.815457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.566 [2024-11-26 19:14:38.871601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.566 [2024-11-26 19:14:38.930546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.566  [2024-11-26T19:14:39.265Z] Copying: 512/512 [B] (average 500 kBps) 00:06:40.825 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732648478 )) 00:06:40.825 00:06:40.825 real 0m2.148s 00:06:40.825 user 0m0.613s 00:06:40.825 sys 0m0.295s 00:06:40.825 ************************************ 00:06:40.825 END TEST dd_flag_noatime_forced_aio 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.825 ************************************ 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.825 ************************************ 00:06:40.825 START TEST dd_flags_misc_forced_aio 00:06:40.825 ************************************ 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.825 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:41.083 [2024-11-26 19:14:39.294389] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:41.083 [2024-11-26 19:14:39.294493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60397 ] 00:06:41.083 [2024-11-26 19:14:39.441698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.083 [2024-11-26 19:14:39.487983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.341 [2024-11-26 19:14:39.542880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.341  [2024-11-26T19:14:40.040Z] Copying: 512/512 [B] (average 500 kBps) 00:06:41.600 00:06:41.601 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ym9x1190h6ykksx5xv01cr7pvb5fgb108mf5wkfc2yvohub10dd4exi3bmj6nywob7k1ht22939g52tws1n3u77zn5rv6clhcpvzbu164ubq932nececo41t1wdb9d4ry1uu0jsuo0dk539fp9fecq4im8p0dxfn8msgmbv59ipq4qqkbddru5ounbjrh11ci1itbqe4vyw3hnfeg07671j2a5uocxh1w08zi7ridg7p4sb0vcv60xqf1fisaohp9dpe9437wy8drzymktk3e15zuphimy6wzmg2j58z3v9z62r6r7wf4yjit4o2y63ruf0k0e5hxvcmr3hlzvd7l59p86p6a42zwvunncwkt0vs9rnz8ieoacqcnz8ya68nqn63yndonj51svpdbw3tvknzvwyt57yqcjc05mb6tqfyeyklq78iip9r1hmsaul79qtv7azbvwaz5kgrfqhau21hqif8r8znuopbe6ckllqeukb8szrmc8i344dozbh3 == 
\y\m\9\x\1\1\9\0\h\6\y\k\k\s\x\5\x\v\0\1\c\r\7\p\v\b\5\f\g\b\1\0\8\m\f\5\w\k\f\c\2\y\v\o\h\u\b\1\0\d\d\4\e\x\i\3\b\m\j\6\n\y\w\o\b\7\k\1\h\t\2\2\9\3\9\g\5\2\t\w\s\1\n\3\u\7\7\z\n\5\r\v\6\c\l\h\c\p\v\z\b\u\1\6\4\u\b\q\9\3\2\n\e\c\e\c\o\4\1\t\1\w\d\b\9\d\4\r\y\1\u\u\0\j\s\u\o\0\d\k\5\3\9\f\p\9\f\e\c\q\4\i\m\8\p\0\d\x\f\n\8\m\s\g\m\b\v\5\9\i\p\q\4\q\q\k\b\d\d\r\u\5\o\u\n\b\j\r\h\1\1\c\i\1\i\t\b\q\e\4\v\y\w\3\h\n\f\e\g\0\7\6\7\1\j\2\a\5\u\o\c\x\h\1\w\0\8\z\i\7\r\i\d\g\7\p\4\s\b\0\v\c\v\6\0\x\q\f\1\f\i\s\a\o\h\p\9\d\p\e\9\4\3\7\w\y\8\d\r\z\y\m\k\t\k\3\e\1\5\z\u\p\h\i\m\y\6\w\z\m\g\2\j\5\8\z\3\v\9\z\6\2\r\6\r\7\w\f\4\y\j\i\t\4\o\2\y\6\3\r\u\f\0\k\0\e\5\h\x\v\c\m\r\3\h\l\z\v\d\7\l\5\9\p\8\6\p\6\a\4\2\z\w\v\u\n\n\c\w\k\t\0\v\s\9\r\n\z\8\i\e\o\a\c\q\c\n\z\8\y\a\6\8\n\q\n\6\3\y\n\d\o\n\j\5\1\s\v\p\d\b\w\3\t\v\k\n\z\v\w\y\t\5\7\y\q\c\j\c\0\5\m\b\6\t\q\f\y\e\y\k\l\q\7\8\i\i\p\9\r\1\h\m\s\a\u\l\7\9\q\t\v\7\a\z\b\v\w\a\z\5\k\g\r\f\q\h\a\u\2\1\h\q\i\f\8\r\8\z\n\u\o\p\b\e\6\c\k\l\l\q\e\u\k\b\8\s\z\r\m\c\8\i\3\4\4\d\o\z\b\h\3 ]] 00:06:41.601 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.601 19:14:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:41.601 [2024-11-26 19:14:39.838310] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:41.601 [2024-11-26 19:14:39.838406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60405 ] 00:06:41.601 [2024-11-26 19:14:39.984412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.859 [2024-11-26 19:14:40.041949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.859 [2024-11-26 19:14:40.096470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.859  [2024-11-26T19:14:40.558Z] Copying: 512/512 [B] (average 500 kBps) 00:06:42.118 00:06:42.118 19:14:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ym9x1190h6ykksx5xv01cr7pvb5fgb108mf5wkfc2yvohub10dd4exi3bmj6nywob7k1ht22939g52tws1n3u77zn5rv6clhcpvzbu164ubq932nececo41t1wdb9d4ry1uu0jsuo0dk539fp9fecq4im8p0dxfn8msgmbv59ipq4qqkbddru5ounbjrh11ci1itbqe4vyw3hnfeg07671j2a5uocxh1w08zi7ridg7p4sb0vcv60xqf1fisaohp9dpe9437wy8drzymktk3e15zuphimy6wzmg2j58z3v9z62r6r7wf4yjit4o2y63ruf0k0e5hxvcmr3hlzvd7l59p86p6a42zwvunncwkt0vs9rnz8ieoacqcnz8ya68nqn63yndonj51svpdbw3tvknzvwyt57yqcjc05mb6tqfyeyklq78iip9r1hmsaul79qtv7azbvwaz5kgrfqhau21hqif8r8znuopbe6ckllqeukb8szrmc8i344dozbh3 == 
\y\m\9\x\1\1\9\0\h\6\y\k\k\s\x\5\x\v\0\1\c\r\7\p\v\b\5\f\g\b\1\0\8\m\f\5\w\k\f\c\2\y\v\o\h\u\b\1\0\d\d\4\e\x\i\3\b\m\j\6\n\y\w\o\b\7\k\1\h\t\2\2\9\3\9\g\5\2\t\w\s\1\n\3\u\7\7\z\n\5\r\v\6\c\l\h\c\p\v\z\b\u\1\6\4\u\b\q\9\3\2\n\e\c\e\c\o\4\1\t\1\w\d\b\9\d\4\r\y\1\u\u\0\j\s\u\o\0\d\k\5\3\9\f\p\9\f\e\c\q\4\i\m\8\p\0\d\x\f\n\8\m\s\g\m\b\v\5\9\i\p\q\4\q\q\k\b\d\d\r\u\5\o\u\n\b\j\r\h\1\1\c\i\1\i\t\b\q\e\4\v\y\w\3\h\n\f\e\g\0\7\6\7\1\j\2\a\5\u\o\c\x\h\1\w\0\8\z\i\7\r\i\d\g\7\p\4\s\b\0\v\c\v\6\0\x\q\f\1\f\i\s\a\o\h\p\9\d\p\e\9\4\3\7\w\y\8\d\r\z\y\m\k\t\k\3\e\1\5\z\u\p\h\i\m\y\6\w\z\m\g\2\j\5\8\z\3\v\9\z\6\2\r\6\r\7\w\f\4\y\j\i\t\4\o\2\y\6\3\r\u\f\0\k\0\e\5\h\x\v\c\m\r\3\h\l\z\v\d\7\l\5\9\p\8\6\p\6\a\4\2\z\w\v\u\n\n\c\w\k\t\0\v\s\9\r\n\z\8\i\e\o\a\c\q\c\n\z\8\y\a\6\8\n\q\n\6\3\y\n\d\o\n\j\5\1\s\v\p\d\b\w\3\t\v\k\n\z\v\w\y\t\5\7\y\q\c\j\c\0\5\m\b\6\t\q\f\y\e\y\k\l\q\7\8\i\i\p\9\r\1\h\m\s\a\u\l\7\9\q\t\v\7\a\z\b\v\w\a\z\5\k\g\r\f\q\h\a\u\2\1\h\q\i\f\8\r\8\z\n\u\o\p\b\e\6\c\k\l\l\q\e\u\k\b\8\s\z\r\m\c\8\i\3\4\4\d\o\z\b\h\3 ]] 00:06:42.118 19:14:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.118 19:14:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:42.118 [2024-11-26 19:14:40.389236] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:42.118 [2024-11-26 19:14:40.389346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60412 ] 00:06:42.118 [2024-11-26 19:14:40.537357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.378 [2024-11-26 19:14:40.586758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.378 [2024-11-26 19:14:40.643834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.378  [2024-11-26T19:14:41.077Z] Copying: 512/512 [B] (average 166 kBps) 00:06:42.637 00:06:42.637 19:14:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ym9x1190h6ykksx5xv01cr7pvb5fgb108mf5wkfc2yvohub10dd4exi3bmj6nywob7k1ht22939g52tws1n3u77zn5rv6clhcpvzbu164ubq932nececo41t1wdb9d4ry1uu0jsuo0dk539fp9fecq4im8p0dxfn8msgmbv59ipq4qqkbddru5ounbjrh11ci1itbqe4vyw3hnfeg07671j2a5uocxh1w08zi7ridg7p4sb0vcv60xqf1fisaohp9dpe9437wy8drzymktk3e15zuphimy6wzmg2j58z3v9z62r6r7wf4yjit4o2y63ruf0k0e5hxvcmr3hlzvd7l59p86p6a42zwvunncwkt0vs9rnz8ieoacqcnz8ya68nqn63yndonj51svpdbw3tvknzvwyt57yqcjc05mb6tqfyeyklq78iip9r1hmsaul79qtv7azbvwaz5kgrfqhau21hqif8r8znuopbe6ckllqeukb8szrmc8i344dozbh3 == 
\y\m\9\x\1\1\9\0\h\6\y\k\k\s\x\5\x\v\0\1\c\r\7\p\v\b\5\f\g\b\1\0\8\m\f\5\w\k\f\c\2\y\v\o\h\u\b\1\0\d\d\4\e\x\i\3\b\m\j\6\n\y\w\o\b\7\k\1\h\t\2\2\9\3\9\g\5\2\t\w\s\1\n\3\u\7\7\z\n\5\r\v\6\c\l\h\c\p\v\z\b\u\1\6\4\u\b\q\9\3\2\n\e\c\e\c\o\4\1\t\1\w\d\b\9\d\4\r\y\1\u\u\0\j\s\u\o\0\d\k\5\3\9\f\p\9\f\e\c\q\4\i\m\8\p\0\d\x\f\n\8\m\s\g\m\b\v\5\9\i\p\q\4\q\q\k\b\d\d\r\u\5\o\u\n\b\j\r\h\1\1\c\i\1\i\t\b\q\e\4\v\y\w\3\h\n\f\e\g\0\7\6\7\1\j\2\a\5\u\o\c\x\h\1\w\0\8\z\i\7\r\i\d\g\7\p\4\s\b\0\v\c\v\6\0\x\q\f\1\f\i\s\a\o\h\p\9\d\p\e\9\4\3\7\w\y\8\d\r\z\y\m\k\t\k\3\e\1\5\z\u\p\h\i\m\y\6\w\z\m\g\2\j\5\8\z\3\v\9\z\6\2\r\6\r\7\w\f\4\y\j\i\t\4\o\2\y\6\3\r\u\f\0\k\0\e\5\h\x\v\c\m\r\3\h\l\z\v\d\7\l\5\9\p\8\6\p\6\a\4\2\z\w\v\u\n\n\c\w\k\t\0\v\s\9\r\n\z\8\i\e\o\a\c\q\c\n\z\8\y\a\6\8\n\q\n\6\3\y\n\d\o\n\j\5\1\s\v\p\d\b\w\3\t\v\k\n\z\v\w\y\t\5\7\y\q\c\j\c\0\5\m\b\6\t\q\f\y\e\y\k\l\q\7\8\i\i\p\9\r\1\h\m\s\a\u\l\7\9\q\t\v\7\a\z\b\v\w\a\z\5\k\g\r\f\q\h\a\u\2\1\h\q\i\f\8\r\8\z\n\u\o\p\b\e\6\c\k\l\l\q\e\u\k\b\8\s\z\r\m\c\8\i\3\4\4\d\o\z\b\h\3 ]] 00:06:42.638 19:14:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.638 19:14:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:42.638 [2024-11-26 19:14:40.952816] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:42.638 [2024-11-26 19:14:40.952954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60420 ] 00:06:42.897 [2024-11-26 19:14:41.098750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.897 [2024-11-26 19:14:41.145587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.897 [2024-11-26 19:14:41.202035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.897  [2024-11-26T19:14:41.595Z] Copying: 512/512 [B] (average 500 kBps) 00:06:43.155 00:06:43.156 19:14:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ym9x1190h6ykksx5xv01cr7pvb5fgb108mf5wkfc2yvohub10dd4exi3bmj6nywob7k1ht22939g52tws1n3u77zn5rv6clhcpvzbu164ubq932nececo41t1wdb9d4ry1uu0jsuo0dk539fp9fecq4im8p0dxfn8msgmbv59ipq4qqkbddru5ounbjrh11ci1itbqe4vyw3hnfeg07671j2a5uocxh1w08zi7ridg7p4sb0vcv60xqf1fisaohp9dpe9437wy8drzymktk3e15zuphimy6wzmg2j58z3v9z62r6r7wf4yjit4o2y63ruf0k0e5hxvcmr3hlzvd7l59p86p6a42zwvunncwkt0vs9rnz8ieoacqcnz8ya68nqn63yndonj51svpdbw3tvknzvwyt57yqcjc05mb6tqfyeyklq78iip9r1hmsaul79qtv7azbvwaz5kgrfqhau21hqif8r8znuopbe6ckllqeukb8szrmc8i344dozbh3 == 
\y\m\9\x\1\1\9\0\h\6\y\k\k\s\x\5\x\v\0\1\c\r\7\p\v\b\5\f\g\b\1\0\8\m\f\5\w\k\f\c\2\y\v\o\h\u\b\1\0\d\d\4\e\x\i\3\b\m\j\6\n\y\w\o\b\7\k\1\h\t\2\2\9\3\9\g\5\2\t\w\s\1\n\3\u\7\7\z\n\5\r\v\6\c\l\h\c\p\v\z\b\u\1\6\4\u\b\q\9\3\2\n\e\c\e\c\o\4\1\t\1\w\d\b\9\d\4\r\y\1\u\u\0\j\s\u\o\0\d\k\5\3\9\f\p\9\f\e\c\q\4\i\m\8\p\0\d\x\f\n\8\m\s\g\m\b\v\5\9\i\p\q\4\q\q\k\b\d\d\r\u\5\o\u\n\b\j\r\h\1\1\c\i\1\i\t\b\q\e\4\v\y\w\3\h\n\f\e\g\0\7\6\7\1\j\2\a\5\u\o\c\x\h\1\w\0\8\z\i\7\r\i\d\g\7\p\4\s\b\0\v\c\v\6\0\x\q\f\1\f\i\s\a\o\h\p\9\d\p\e\9\4\3\7\w\y\8\d\r\z\y\m\k\t\k\3\e\1\5\z\u\p\h\i\m\y\6\w\z\m\g\2\j\5\8\z\3\v\9\z\6\2\r\6\r\7\w\f\4\y\j\i\t\4\o\2\y\6\3\r\u\f\0\k\0\e\5\h\x\v\c\m\r\3\h\l\z\v\d\7\l\5\9\p\8\6\p\6\a\4\2\z\w\v\u\n\n\c\w\k\t\0\v\s\9\r\n\z\8\i\e\o\a\c\q\c\n\z\8\y\a\6\8\n\q\n\6\3\y\n\d\o\n\j\5\1\s\v\p\d\b\w\3\t\v\k\n\z\v\w\y\t\5\7\y\q\c\j\c\0\5\m\b\6\t\q\f\y\e\y\k\l\q\7\8\i\i\p\9\r\1\h\m\s\a\u\l\7\9\q\t\v\7\a\z\b\v\w\a\z\5\k\g\r\f\q\h\a\u\2\1\h\q\i\f\8\r\8\z\n\u\o\p\b\e\6\c\k\l\l\q\e\u\k\b\8\s\z\r\m\c\8\i\3\4\4\d\o\z\b\h\3 ]] 00:06:43.156 19:14:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:43.156 19:14:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:43.156 19:14:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:43.156 19:14:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:43.156 19:14:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.156 19:14:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:43.156 [2024-11-26 19:14:41.507889] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:43.156 [2024-11-26 19:14:41.508018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:06:43.415 [2024-11-26 19:14:41.654455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.415 [2024-11-26 19:14:41.710547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.415 [2024-11-26 19:14:41.771359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.415  [2024-11-26T19:14:42.114Z] Copying: 512/512 [B] (average 500 kBps) 00:06:43.674 00:06:43.674 19:14:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ce3wtxnqysgjyjhigaly8943o3je3ysm1tl70gtr897trwaai38pxwi17yukommw8k53e6pcu5w8twzur57lngf6ga3ywsc4cw085xuecblz50pfybf9gg5xlbtzjjroikwqmwphn4vzeuq3udr57b2i9ax7mocdh4pqvbx6jlu9ierdcum8ccrbajt122wc07bj31jb6dw2r0w0p9c76zqj0r9u12rjsy99qa6hx9rwykqlf34d92lvxqy0bsaou2emwhdzaw46w61h9epbuouwyzbzk3bl78xj2mw0yonnazdkyi8hlzodey6k4faozwjdih6xq0ubb9qm57g366f79vw6arux6ph2lje3m6uzk139glwqgcucw6lo12a1ia6xc81kh3akzlhpclm4jaoxp8ehlz19dwjsyobdjopy77okcc1wymzgc6f22tpl92mv3y0dtbvo5b5t3ao6enf978xuotz1pw0cede78pv0vb4zbdzk1su0xhb8nnux == \c\e\3\w\t\x\n\q\y\s\g\j\y\j\h\i\g\a\l\y\8\9\4\3\o\3\j\e\3\y\s\m\1\t\l\7\0\g\t\r\8\9\7\t\r\w\a\a\i\3\8\p\x\w\i\1\7\y\u\k\o\m\m\w\8\k\5\3\e\6\p\c\u\5\w\8\t\w\z\u\r\5\7\l\n\g\f\6\g\a\3\y\w\s\c\4\c\w\0\8\5\x\u\e\c\b\l\z\5\0\p\f\y\b\f\9\g\g\5\x\l\b\t\z\j\j\r\o\i\k\w\q\m\w\p\h\n\4\v\z\e\u\q\3\u\d\r\5\7\b\2\i\9\a\x\7\m\o\c\d\h\4\p\q\v\b\x\6\j\l\u\9\i\e\r\d\c\u\m\8\c\c\r\b\a\j\t\1\2\2\w\c\0\7\b\j\3\1\j\b\6\d\w\2\r\0\w\0\p\9\c\7\6\z\q\j\0\r\9\u\1\2\r\j\s\y\9\9\q\a\6\h\x\9\r\w\y\k\q\l\f\3\4\d\9\2\l\v\x\q\y\0\b\s\a\o\u\2\e\m\w\h\d\z\a\w\4\6\w\6\1\h\9\e\p\b\u\o\u\w\y\z\b\z\k\3\b\l\7\8\x\j\2\m\w\0\y\o\n\n\a\z\d\k\y\i\8\h\l\z\o\d\e\y\6\k\4\f\a\o\z\w\j\d\i\h\6\x\q\0\u\b\b\9\q\m\5\7\g\3\6\6\f\7\9\v\w\6\a\r\u\x\6\p\h\2\l\j\e\3\m\6\u\z\k\1\3\9\g\l\w\q\g\c\u\c\w\6\l\o\1\2\a\1\i\a\6\x\c\8\1\k\h\3\a\k\z\l\h\p\c\l\m\4\j\a\o\x\p\8\e\h\l\z\1\9\d\w\j\s\y\o\b\d\j\o\p\y\7\7\o\k\c\c\1\w\y\m\z\g\c\6\f\2\2\t\p\l\9\2\m\v\3\y\0\d\t\b\v\o\5\b\5\t\3\a\o\6\e\n\f\9\7\8\x\u\o\t\z\1\p\w\0\c\e\d\e\7\8\p\v\0\v\b\4\z\b\d\z\k\1\s\u\0\x\h\b\8\n\n\u\x ]] 00:06:43.674 19:14:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.674 19:14:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:43.674 [2024-11-26 19:14:42.070034] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:43.674 [2024-11-26 19:14:42.070134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60435 ] 00:06:43.933 [2024-11-26 19:14:42.220591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.933 [2024-11-26 19:14:42.268106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.933 [2024-11-26 19:14:42.329357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.933  [2024-11-26T19:14:42.631Z] Copying: 512/512 [B] (average 500 kBps) 00:06:44.191 00:06:44.191 19:14:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ce3wtxnqysgjyjhigaly8943o3je3ysm1tl70gtr897trwaai38pxwi17yukommw8k53e6pcu5w8twzur57lngf6ga3ywsc4cw085xuecblz50pfybf9gg5xlbtzjjroikwqmwphn4vzeuq3udr57b2i9ax7mocdh4pqvbx6jlu9ierdcum8ccrbajt122wc07bj31jb6dw2r0w0p9c76zqj0r9u12rjsy99qa6hx9rwykqlf34d92lvxqy0bsaou2emwhdzaw46w61h9epbuouwyzbzk3bl78xj2mw0yonnazdkyi8hlzodey6k4faozwjdih6xq0ubb9qm57g366f79vw6arux6ph2lje3m6uzk139glwqgcucw6lo12a1ia6xc81kh3akzlhpclm4jaoxp8ehlz19dwjsyobdjopy77okcc1wymzgc6f22tpl92mv3y0dtbvo5b5t3ao6enf978xuotz1pw0cede78pv0vb4zbdzk1su0xhb8nnux == \c\e\3\w\t\x\n\q\y\s\g\j\y\j\h\i\g\a\l\y\8\9\4\3\o\3\j\e\3\y\s\m\1\t\l\7\0\g\t\r\8\9\7\t\r\w\a\a\i\3\8\p\x\w\i\1\7\y\u\k\o\m\m\w\8\k\5\3\e\6\p\c\u\5\w\8\t\w\z\u\r\5\7\l\n\g\f\6\g\a\3\y\w\s\c\4\c\w\0\8\5\x\u\e\c\b\l\z\5\0\p\f\y\b\f\9\g\g\5\x\l\b\t\z\j\j\r\o\i\k\w\q\m\w\p\h\n\4\v\z\e\u\q\3\u\d\r\5\7\b\2\i\9\a\x\7\m\o\c\d\h\4\p\q\v\b\x\6\j\l\u\9\i\e\r\d\c\u\m\8\c\c\r\b\a\j\t\1\2\2\w\c\0\7\b\j\3\1\j\b\6\d\w\2\r\0\w\0\p\9\c\7\6\z\q\j\0\r\9\u\1\2\r\j\s\y\9\9\q\a\6\h\x\9\r\w\y\k\q\l\f\3\4\d\9\2\l\v\x\q\y\0\b\s\a\o\u\2\e\m\w\h\d\z\a\w\4\6\w\6\1\h\9\e\p\b\u\o\u\w\y\z\b\z\k\3\b\l\7\8\x\j\2\m\w\0\y\o\n\n\a\z\d\k\y\i\8\h\l\z\o\d\e\y\6\k\4\f\a\o\z\w\j\d\i\h\6\x\q\0\u\b\b\9\q\m\5\7\g\3\6\6\f\7\9\v\w\6\a\r\u\x\6\p\h\2\l\j\e\3\m\6\u\z\k\1\3\9\g\l\w\q\g\c\u\c\w\6\l\o\1\2\a\1\i\a\6\x\c\8\1\k\h\3\a\k\z\l\h\p\c\l\m\4\j\a\o\x\p\8\e\h\l\z\1\9\d\w\j\s\y\o\b\d\j\o\p\y\7\7\o\k\c\c\1\w\y\m\z\g\c\6\f\2\2\t\p\l\9\2\m\v\3\y\0\d\t\b\v\o\5\b\5\t\3\a\o\6\e\n\f\9\7\8\x\u\o\t\z\1\p\w\0\c\e\d\e\7\8\p\v\0\v\b\4\z\b\d\z\k\1\s\u\0\x\h\b\8\n\n\u\x ]] 00:06:44.191 19:14:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.191 19:14:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:44.451 [2024-11-26 19:14:42.640305] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:44.451 [2024-11-26 19:14:42.640406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60442 ] 00:06:44.451 [2024-11-26 19:14:42.785819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.451 [2024-11-26 19:14:42.832153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.710 [2024-11-26 19:14:42.889645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.710  [2024-11-26T19:14:43.150Z] Copying: 512/512 [B] (average 500 kBps) 00:06:44.710 00:06:44.710 19:14:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ce3wtxnqysgjyjhigaly8943o3je3ysm1tl70gtr897trwaai38pxwi17yukommw8k53e6pcu5w8twzur57lngf6ga3ywsc4cw085xuecblz50pfybf9gg5xlbtzjjroikwqmwphn4vzeuq3udr57b2i9ax7mocdh4pqvbx6jlu9ierdcum8ccrbajt122wc07bj31jb6dw2r0w0p9c76zqj0r9u12rjsy99qa6hx9rwykqlf34d92lvxqy0bsaou2emwhdzaw46w61h9epbuouwyzbzk3bl78xj2mw0yonnazdkyi8hlzodey6k4faozwjdih6xq0ubb9qm57g366f79vw6arux6ph2lje3m6uzk139glwqgcucw6lo12a1ia6xc81kh3akzlhpclm4jaoxp8ehlz19dwjsyobdjopy77okcc1wymzgc6f22tpl92mv3y0dtbvo5b5t3ao6enf978xuotz1pw0cede78pv0vb4zbdzk1su0xhb8nnux == \c\e\3\w\t\x\n\q\y\s\g\j\y\j\h\i\g\a\l\y\8\9\4\3\o\3\j\e\3\y\s\m\1\t\l\7\0\g\t\r\8\9\7\t\r\w\a\a\i\3\8\p\x\w\i\1\7\y\u\k\o\m\m\w\8\k\5\3\e\6\p\c\u\5\w\8\t\w\z\u\r\5\7\l\n\g\f\6\g\a\3\y\w\s\c\4\c\w\0\8\5\x\u\e\c\b\l\z\5\0\p\f\y\b\f\9\g\g\5\x\l\b\t\z\j\j\r\o\i\k\w\q\m\w\p\h\n\4\v\z\e\u\q\3\u\d\r\5\7\b\2\i\9\a\x\7\m\o\c\d\h\4\p\q\v\b\x\6\j\l\u\9\i\e\r\d\c\u\m\8\c\c\r\b\a\j\t\1\2\2\w\c\0\7\b\j\3\1\j\b\6\d\w\2\r\0\w\0\p\9\c\7\6\z\q\j\0\r\9\u\1\2\r\j\s\y\9\9\q\a\6\h\x\9\r\w\y\k\q\l\f\3\4\d\9\2\l\v\x\q\y\0\b\s\a\o\u\2\e\m\w\h\d\z\a\w\4\6\w\6\1\h\9\e\p\b\u\o\u\w\y\z\b\z\k\3\b\l\7\8\x\j\2\m\w\0\y\o\n\n\a\z\d\k\y\i\8\h\l\z\o\d\e\y\6\k\4\f\a\o\z\w\j\d\i\h\6\x\q\0\u\b\b\9\q\m\5\7\g\3\6\6\f\7\9\v\w\6\a\r\u\x\6\p\h\2\l\j\e\3\m\6\u\z\k\1\3\9\g\l\w\q\g\c\u\c\w\6\l\o\1\2\a\1\i\a\6\x\c\8\1\k\h\3\a\k\z\l\h\p\c\l\m\4\j\a\o\x\p\8\e\h\l\z\1\9\d\w\j\s\y\o\b\d\j\o\p\y\7\7\o\k\c\c\1\w\y\m\z\g\c\6\f\2\2\t\p\l\9\2\m\v\3\y\0\d\t\b\v\o\5\b\5\t\3\a\o\6\e\n\f\9\7\8\x\u\o\t\z\1\p\w\0\c\e\d\e\7\8\p\v\0\v\b\4\z\b\d\z\k\1\s\u\0\x\h\b\8\n\n\u\x ]] 00:06:44.710 19:14:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.710 19:14:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:44.970 [2024-11-26 19:14:43.192699] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:44.970 [2024-11-26 19:14:43.192791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60452 ] 00:06:44.970 [2024-11-26 19:14:43.342541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.970 [2024-11-26 19:14:43.392063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.229 [2024-11-26 19:14:43.453303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.229  [2024-11-26T19:14:43.927Z] Copying: 512/512 [B] (average 250 kBps) 00:06:45.487 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ce3wtxnqysgjyjhigaly8943o3je3ysm1tl70gtr897trwaai38pxwi17yukommw8k53e6pcu5w8twzur57lngf6ga3ywsc4cw085xuecblz50pfybf9gg5xlbtzjjroikwqmwphn4vzeuq3udr57b2i9ax7mocdh4pqvbx6jlu9ierdcum8ccrbajt122wc07bj31jb6dw2r0w0p9c76zqj0r9u12rjsy99qa6hx9rwykqlf34d92lvxqy0bsaou2emwhdzaw46w61h9epbuouwyzbzk3bl78xj2mw0yonnazdkyi8hlzodey6k4faozwjdih6xq0ubb9qm57g366f79vw6arux6ph2lje3m6uzk139glwqgcucw6lo12a1ia6xc81kh3akzlhpclm4jaoxp8ehlz19dwjsyobdjopy77okcc1wymzgc6f22tpl92mv3y0dtbvo5b5t3ao6enf978xuotz1pw0cede78pv0vb4zbdzk1su0xhb8nnux == \c\e\3\w\t\x\n\q\y\s\g\j\y\j\h\i\g\a\l\y\8\9\4\3\o\3\j\e\3\y\s\m\1\t\l\7\0\g\t\r\8\9\7\t\r\w\a\a\i\3\8\p\x\w\i\1\7\y\u\k\o\m\m\w\8\k\5\3\e\6\p\c\u\5\w\8\t\w\z\u\r\5\7\l\n\g\f\6\g\a\3\y\w\s\c\4\c\w\0\8\5\x\u\e\c\b\l\z\5\0\p\f\y\b\f\9\g\g\5\x\l\b\t\z\j\j\r\o\i\k\w\q\m\w\p\h\n\4\v\z\e\u\q\3\u\d\r\5\7\b\2\i\9\a\x\7\m\o\c\d\h\4\p\q\v\b\x\6\j\l\u\9\i\e\r\d\c\u\m\8\c\c\r\b\a\j\t\1\2\2\w\c\0\7\b\j\3\1\j\b\6\d\w\2\r\0\w\0\p\9\c\7\6\z\q\j\0\r\9\u\1\2\r\j\s\y\9\9\q\a\6\h\x\9\r\w\y\k\q\l\f\3\4\d\9\2\l\v\x\q\y\0\b\s\a\o\u\2\e\m\w\h\d\z\a\w\4\6\w\6\1\h\9\e\p\b\u\o\u\w\y\z\b\z\k\3\b\l\7\8\x\j\2\m\w\0\y\o\n\n\a\z\d\k\y\i\8\h\l\z\o\d\e\y\6\k\4\f\a\o\z\w\j\d\i\h\6\x\q\0\u\b\b\9\q\m\5\7\g\3\6\6\f\7\9\v\w\6\a\r\u\x\6\p\h\2\l\j\e\3\m\6\u\z\k\1\3\9\g\l\w\q\g\c\u\c\w\6\l\o\1\2\a\1\i\a\6\x\c\8\1\k\h\3\a\k\z\l\h\p\c\l\m\4\j\a\o\x\p\8\e\h\l\z\1\9\d\w\j\s\y\o\b\d\j\o\p\y\7\7\o\k\c\c\1\w\y\m\z\g\c\6\f\2\2\t\p\l\9\2\m\v\3\y\0\d\t\b\v\o\5\b\5\t\3\a\o\6\e\n\f\9\7\8\x\u\o\t\z\1\p\w\0\c\e\d\e\7\8\p\v\0\v\b\4\z\b\d\z\k\1\s\u\0\x\h\b\8\n\n\u\x ]] 00:06:45.487 00:06:45.487 real 0m4.498s 00:06:45.487 user 0m2.380s 00:06:45.487 sys 0m1.135s 00:06:45.487 ************************************ 00:06:45.487 END TEST dd_flags_misc_forced_aio 00:06:45.487 ************************************ 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:45.487 00:06:45.487 real 0m19.640s 00:06:45.487 user 0m9.169s 00:06:45.487 sys 0m6.389s 00:06:45.487 ************************************ 00:06:45.487 END TEST spdk_dd_posix 00:06:45.487 ************************************ 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.487 19:14:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.487 19:14:43 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:45.487 19:14:43 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.487 19:14:43 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.487 19:14:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.487 ************************************ 00:06:45.487 START TEST spdk_dd_malloc 00:06:45.487 ************************************ 00:06:45.487 19:14:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:45.487 * Looking for test storage... 00:06:45.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.487 19:14:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.487 19:14:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.487 19:14:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.747 19:14:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.747 19:14:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.747 19:14:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.747 19:14:43 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.747 --rc genhtml_branch_coverage=1 00:06:45.747 --rc genhtml_function_coverage=1 00:06:45.747 --rc genhtml_legend=1 00:06:45.747 --rc geninfo_all_blocks=1 00:06:45.747 --rc geninfo_unexecuted_blocks=1 00:06:45.747 00:06:45.747 ' 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.747 --rc genhtml_branch_coverage=1 00:06:45.747 --rc genhtml_function_coverage=1 00:06:45.747 --rc genhtml_legend=1 00:06:45.747 --rc geninfo_all_blocks=1 00:06:45.747 --rc geninfo_unexecuted_blocks=1 00:06:45.747 00:06:45.747 ' 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.747 --rc genhtml_branch_coverage=1 00:06:45.747 --rc genhtml_function_coverage=1 00:06:45.747 --rc genhtml_legend=1 00:06:45.747 --rc geninfo_all_blocks=1 00:06:45.747 --rc geninfo_unexecuted_blocks=1 00:06:45.747 00:06:45.747 ' 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.747 --rc genhtml_branch_coverage=1 00:06:45.747 --rc genhtml_function_coverage=1 00:06:45.747 --rc genhtml_legend=1 00:06:45.747 --rc geninfo_all_blocks=1 00:06:45.747 --rc geninfo_unexecuted_blocks=1 00:06:45.747 00:06:45.747 ' 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.747 19:14:44 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 ************************************ 00:06:45.747 START TEST dd_malloc_copy 00:06:45.747 ************************************ 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:45.747 19:14:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 [2024-11-26 19:14:44.084146] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:45.747 [2024-11-26 19:14:44.084351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60535 ] 00:06:45.747 { 00:06:45.747 "subsystems": [ 00:06:45.747 { 00:06:45.747 "subsystem": "bdev", 00:06:45.747 "config": [ 00:06:45.747 { 00:06:45.747 "params": { 00:06:45.747 "block_size": 512, 00:06:45.747 "num_blocks": 1048576, 00:06:45.747 "name": "malloc0" 00:06:45.747 }, 00:06:45.747 "method": "bdev_malloc_create" 00:06:45.747 }, 00:06:45.747 { 00:06:45.747 "params": { 00:06:45.747 "block_size": 512, 00:06:45.747 "num_blocks": 1048576, 00:06:45.747 "name": "malloc1" 00:06:45.747 }, 00:06:45.747 "method": "bdev_malloc_create" 00:06:45.747 }, 00:06:45.747 { 00:06:45.747 "method": "bdev_wait_for_examine" 00:06:45.747 } 00:06:45.747 ] 00:06:45.747 } 00:06:45.747 ] 00:06:45.747 } 00:06:46.006 [2024-11-26 19:14:44.231452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.007 [2024-11-26 19:14:44.284361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.007 [2024-11-26 19:14:44.344868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.386  [2024-11-26T19:14:46.764Z] Copying: 202/512 [MB] (202 MBps) [2024-11-26T19:14:47.331Z] Copying: 428/512 [MB] (226 MBps) [2024-11-26T19:14:47.900Z] Copying: 512/512 [MB] (average 214 MBps) 00:06:49.460 00:06:49.460 19:14:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:49.460 19:14:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:49.460 19:14:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:49.460 19:14:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:49.460 [2024-11-26 19:14:47.748393] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:49.460 [2024-11-26 19:14:47.748511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:06:49.460 { 00:06:49.460 "subsystems": [ 00:06:49.460 { 00:06:49.460 "subsystem": "bdev", 00:06:49.460 "config": [ 00:06:49.460 { 00:06:49.460 "params": { 00:06:49.460 "block_size": 512, 00:06:49.460 "num_blocks": 1048576, 00:06:49.460 "name": "malloc0" 00:06:49.460 }, 00:06:49.460 "method": "bdev_malloc_create" 00:06:49.460 }, 00:06:49.460 { 00:06:49.460 "params": { 00:06:49.460 "block_size": 512, 00:06:49.460 "num_blocks": 1048576, 00:06:49.460 "name": "malloc1" 00:06:49.460 }, 00:06:49.460 "method": "bdev_malloc_create" 00:06:49.460 }, 00:06:49.460 { 00:06:49.460 "method": "bdev_wait_for_examine" 00:06:49.460 } 00:06:49.460 ] 00:06:49.460 } 00:06:49.460 ] 00:06:49.460 } 00:06:49.460 [2024-11-26 19:14:47.895096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.718 [2024-11-26 19:14:47.943540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.718 [2024-11-26 19:14:48.001320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.098  [2024-11-26T19:14:50.514Z] Copying: 233/512 [MB] (233 MBps) [2024-11-26T19:14:50.782Z] Copying: 459/512 [MB] (226 MBps) [2024-11-26T19:14:51.351Z] Copying: 512/512 [MB] (average 228 MBps) 00:06:52.911 00:06:52.911 00:06:52.911 real 0m7.143s 00:06:52.911 user 0m6.095s 00:06:52.911 sys 0m0.901s 00:06:52.911 19:14:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.911 19:14:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.911 ************************************ 00:06:52.911 END TEST dd_malloc_copy 00:06:52.911 ************************************ 00:06:52.911 ************************************ 00:06:52.911 END TEST spdk_dd_malloc 00:06:52.911 ************************************ 00:06:52.911 00:06:52.911 real 0m7.395s 00:06:52.911 user 0m6.235s 00:06:52.911 sys 0m1.013s 00:06:52.911 19:14:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.911 19:14:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:52.911 19:14:51 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:52.911 19:14:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:52.911 19:14:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.911 19:14:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:52.911 ************************************ 00:06:52.911 START TEST spdk_dd_bdev_to_bdev 00:06:52.911 ************************************ 00:06:52.911 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:53.171 * Looking for test storage... 
00:06:53.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.171 --rc genhtml_branch_coverage=1 00:06:53.171 --rc genhtml_function_coverage=1 00:06:53.171 --rc genhtml_legend=1 00:06:53.171 --rc geninfo_all_blocks=1 00:06:53.171 --rc geninfo_unexecuted_blocks=1 00:06:53.171 00:06:53.171 ' 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.171 --rc genhtml_branch_coverage=1 00:06:53.171 --rc genhtml_function_coverage=1 00:06:53.171 --rc genhtml_legend=1 00:06:53.171 --rc geninfo_all_blocks=1 00:06:53.171 --rc geninfo_unexecuted_blocks=1 00:06:53.171 00:06:53.171 ' 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.171 --rc genhtml_branch_coverage=1 00:06:53.171 --rc genhtml_function_coverage=1 00:06:53.171 --rc genhtml_legend=1 00:06:53.171 --rc geninfo_all_blocks=1 00:06:53.171 --rc geninfo_unexecuted_blocks=1 00:06:53.171 00:06:53.171 ' 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.171 --rc genhtml_branch_coverage=1 00:06:53.171 --rc genhtml_function_coverage=1 00:06:53.171 --rc genhtml_legend=1 00:06:53.171 --rc geninfo_all_blocks=1 00:06:53.171 --rc geninfo_unexecuted_blocks=1 00:06:53.171 00:06:53.171 ' 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.171 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.171 19:14:51 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:53.172 ************************************ 00:06:53.172 START TEST dd_inflate_file 00:06:53.172 ************************************ 00:06:53.172 19:14:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:53.172 [2024-11-26 19:14:51.542637] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:53.172 [2024-11-26 19:14:51.542993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60695 ] 00:06:53.431 [2024-11-26 19:14:51.684033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.431 [2024-11-26 19:14:51.745296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.431 [2024-11-26 19:14:51.802555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.775  [2024-11-26T19:14:52.215Z] Copying: 64/64 [MB] (average 1488 MBps) 00:06:53.775 00:06:53.775 00:06:53.775 real 0m0.583s 00:06:53.775 user 0m0.332s 00:06:53.775 sys 0m0.316s 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:53.775 ************************************ 00:06:53.775 END TEST dd_inflate_file 00:06:53.775 ************************************ 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:53.775 ************************************ 00:06:53.775 START TEST dd_copy_to_out_bdev 00:06:53.775 ************************************ 00:06:53.775 19:14:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:53.775 [2024-11-26 19:14:52.192234] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:53.775 [2024-11-26 19:14:52.192561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60734 ] 00:06:53.775 { 00:06:53.775 "subsystems": [ 00:06:53.775 { 00:06:53.775 "subsystem": "bdev", 00:06:53.775 "config": [ 00:06:53.775 { 00:06:53.775 "params": { 00:06:53.775 "trtype": "pcie", 00:06:53.775 "traddr": "0000:00:10.0", 00:06:53.775 "name": "Nvme0" 00:06:53.775 }, 00:06:53.775 "method": "bdev_nvme_attach_controller" 00:06:53.775 }, 00:06:53.775 { 00:06:53.775 "params": { 00:06:53.775 "trtype": "pcie", 00:06:53.775 "traddr": "0000:00:11.0", 00:06:53.775 "name": "Nvme1" 00:06:53.775 }, 00:06:53.775 "method": "bdev_nvme_attach_controller" 00:06:53.775 }, 00:06:53.775 { 00:06:53.775 "method": "bdev_wait_for_examine" 00:06:53.775 } 00:06:53.775 ] 00:06:53.775 } 00:06:53.775 ] 00:06:53.775 } 00:06:54.035 [2024-11-26 19:14:52.342369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.035 [2024-11-26 19:14:52.412921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.295 [2024-11-26 19:14:52.475291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.230  [2024-11-26T19:14:53.929Z] Copying: 55/64 [MB] (55 MBps) [2024-11-26T19:14:54.188Z] Copying: 64/64 [MB] (average 54 MBps) 00:06:55.748 00:06:55.748 00:06:55.748 real 0m1.906s 00:06:55.748 user 0m1.681s 00:06:55.748 sys 0m1.519s 00:06:55.748 ************************************ 00:06:55.748 END TEST dd_copy_to_out_bdev 00:06:55.748 ************************************ 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.748 ************************************ 00:06:55.748 START TEST dd_offset_magic 00:06:55.748 ************************************ 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:55.748 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:55.749 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:55.749 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:55.749 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:06:55.749 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:55.749 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:55.749 [2024-11-26 19:14:54.151292] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:06:55.749 [2024-11-26 19:14:54.151563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60779 ] 00:06:55.749 { 00:06:55.749 "subsystems": [ 00:06:55.749 { 00:06:55.749 "subsystem": "bdev", 00:06:55.749 "config": [ 00:06:55.749 { 00:06:55.749 "params": { 00:06:55.749 "trtype": "pcie", 00:06:55.749 "traddr": "0000:00:10.0", 00:06:55.749 "name": "Nvme0" 00:06:55.749 }, 00:06:55.749 "method": "bdev_nvme_attach_controller" 00:06:55.749 }, 00:06:55.749 { 00:06:55.749 "params": { 00:06:55.749 "trtype": "pcie", 00:06:55.749 "traddr": "0000:00:11.0", 00:06:55.749 "name": "Nvme1" 00:06:55.749 }, 00:06:55.749 "method": "bdev_nvme_attach_controller" 00:06:55.749 }, 00:06:55.749 { 00:06:55.749 "method": "bdev_wait_for_examine" 00:06:55.749 } 00:06:55.749 ] 00:06:55.749 } 00:06:55.749 ] 00:06:55.749 } 00:06:56.008 [2024-11-26 19:14:54.291461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.008 [2024-11-26 19:14:54.333472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.008 [2024-11-26 19:14:54.389564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.267  [2024-11-26T19:14:54.965Z] Copying: 65/65 [MB] (average 783 MBps) 00:06:56.525 00:06:56.525 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:56.525 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:56.525 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:56.525 19:14:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:56.525 [2024-11-26 19:14:54.941424] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:56.525 [2024-11-26 19:14:54.941544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60793 ] 00:06:56.525 { 00:06:56.525 "subsystems": [ 00:06:56.525 { 00:06:56.525 "subsystem": "bdev", 00:06:56.525 "config": [ 00:06:56.525 { 00:06:56.525 "params": { 00:06:56.525 "trtype": "pcie", 00:06:56.525 "traddr": "0000:00:10.0", 00:06:56.525 "name": "Nvme0" 00:06:56.525 }, 00:06:56.525 "method": "bdev_nvme_attach_controller" 00:06:56.525 }, 00:06:56.525 { 00:06:56.525 "params": { 00:06:56.525 "trtype": "pcie", 00:06:56.525 "traddr": "0000:00:11.0", 00:06:56.525 "name": "Nvme1" 00:06:56.525 }, 00:06:56.525 "method": "bdev_nvme_attach_controller" 00:06:56.525 }, 00:06:56.525 { 00:06:56.525 "method": "bdev_wait_for_examine" 00:06:56.525 } 00:06:56.525 ] 00:06:56.525 } 00:06:56.525 ] 00:06:56.525 } 00:06:56.784 [2024-11-26 19:14:55.088492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.784 [2024-11-26 19:14:55.131459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.784 [2024-11-26 19:14:55.188082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.043  [2024-11-26T19:14:55.742Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:57.302 00:06:57.302 19:14:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:57.302 19:14:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:57.302 19:14:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:57.302 19:14:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:57.302 19:14:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:57.302 19:14:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:57.302 19:14:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:57.302 [2024-11-26 19:14:55.600837] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:57.302 [2024-11-26 19:14:55.600966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60810 ] 00:06:57.302 { 00:06:57.302 "subsystems": [ 00:06:57.302 { 00:06:57.302 "subsystem": "bdev", 00:06:57.302 "config": [ 00:06:57.302 { 00:06:57.302 "params": { 00:06:57.302 "trtype": "pcie", 00:06:57.302 "traddr": "0000:00:10.0", 00:06:57.302 "name": "Nvme0" 00:06:57.302 }, 00:06:57.302 "method": "bdev_nvme_attach_controller" 00:06:57.302 }, 00:06:57.302 { 00:06:57.302 "params": { 00:06:57.302 "trtype": "pcie", 00:06:57.302 "traddr": "0000:00:11.0", 00:06:57.302 "name": "Nvme1" 00:06:57.302 }, 00:06:57.302 "method": "bdev_nvme_attach_controller" 00:06:57.302 }, 00:06:57.302 { 00:06:57.302 "method": "bdev_wait_for_examine" 00:06:57.302 } 00:06:57.302 ] 00:06:57.302 } 00:06:57.302 ] 00:06:57.302 } 00:06:57.560 [2024-11-26 19:14:55.741984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.560 [2024-11-26 19:14:55.799781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.560 [2024-11-26 19:14:55.855925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.819  [2024-11-26T19:14:56.518Z] Copying: 65/65 [MB] (average 955 MBps) 00:06:58.078 00:06:58.078 19:14:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:58.078 19:14:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:58.078 19:14:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:58.078 19:14:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:58.078 { 00:06:58.078 "subsystems": [ 00:06:58.078 { 00:06:58.078 "subsystem": "bdev", 00:06:58.078 "config": [ 00:06:58.078 { 00:06:58.078 "params": { 00:06:58.078 "trtype": "pcie", 00:06:58.078 "traddr": "0000:00:10.0", 00:06:58.078 "name": "Nvme0" 00:06:58.078 }, 00:06:58.078 "method": "bdev_nvme_attach_controller" 00:06:58.078 }, 00:06:58.078 { 00:06:58.078 "params": { 00:06:58.078 "trtype": "pcie", 00:06:58.078 "traddr": "0000:00:11.0", 00:06:58.078 "name": "Nvme1" 00:06:58.078 }, 00:06:58.078 "method": "bdev_nvme_attach_controller" 00:06:58.078 }, 00:06:58.078 { 00:06:58.078 "method": "bdev_wait_for_examine" 00:06:58.078 } 00:06:58.078 ] 00:06:58.078 } 00:06:58.078 ] 00:06:58.078 } 00:06:58.078 [2024-11-26 19:14:56.386653] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:58.078 [2024-11-26 19:14:56.386758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60830 ] 00:06:58.337 [2024-11-26 19:14:56.532483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.337 [2024-11-26 19:14:56.589338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.337 [2024-11-26 19:14:56.642202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.596  [2024-11-26T19:14:57.036Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:58.596 00:06:58.596 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:58.596 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:58.596 00:06:58.596 real 0m2.922s 00:06:58.596 user 0m2.122s 00:06:58.596 sys 0m0.884s 00:06:58.596 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.596 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:58.596 ************************************ 00:06:58.596 END TEST dd_offset_magic 00:06:58.596 ************************************ 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.855 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.855 [2024-11-26 19:14:57.122210] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:58.855 [2024-11-26 19:14:57.122318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60867 ] 00:06:58.855 { 00:06:58.855 "subsystems": [ 00:06:58.855 { 00:06:58.855 "subsystem": "bdev", 00:06:58.855 "config": [ 00:06:58.855 { 00:06:58.855 "params": { 00:06:58.855 "trtype": "pcie", 00:06:58.855 "traddr": "0000:00:10.0", 00:06:58.855 "name": "Nvme0" 00:06:58.855 }, 00:06:58.855 "method": "bdev_nvme_attach_controller" 00:06:58.855 }, 00:06:58.855 { 00:06:58.855 "params": { 00:06:58.855 "trtype": "pcie", 00:06:58.855 "traddr": "0000:00:11.0", 00:06:58.855 "name": "Nvme1" 00:06:58.855 }, 00:06:58.855 "method": "bdev_nvme_attach_controller" 00:06:58.855 }, 00:06:58.855 { 00:06:58.855 "method": "bdev_wait_for_examine" 00:06:58.855 } 00:06:58.855 ] 00:06:58.855 } 00:06:58.855 ] 00:06:58.855 } 00:06:58.855 [2024-11-26 19:14:57.269846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.114 [2024-11-26 19:14:57.317334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.114 [2024-11-26 19:14:57.378828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.374  [2024-11-26T19:14:57.814Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:59.374 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:59.374 19:14:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.374 [2024-11-26 19:14:57.810131] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:06:59.374 [2024-11-26 19:14:57.810252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60877 ] 00:06:59.374 { 00:06:59.374 "subsystems": [ 00:06:59.374 { 00:06:59.374 "subsystem": "bdev", 00:06:59.374 "config": [ 00:06:59.374 { 00:06:59.374 "params": { 00:06:59.374 "trtype": "pcie", 00:06:59.374 "traddr": "0000:00:10.0", 00:06:59.374 "name": "Nvme0" 00:06:59.374 }, 00:06:59.374 "method": "bdev_nvme_attach_controller" 00:06:59.374 }, 00:06:59.374 { 00:06:59.374 "params": { 00:06:59.374 "trtype": "pcie", 00:06:59.374 "traddr": "0000:00:11.0", 00:06:59.374 "name": "Nvme1" 00:06:59.374 }, 00:06:59.374 "method": "bdev_nvme_attach_controller" 00:06:59.374 }, 00:06:59.374 { 00:06:59.374 "method": "bdev_wait_for_examine" 00:06:59.374 } 00:06:59.374 ] 00:06:59.374 } 00:06:59.374 ] 00:06:59.374 } 00:06:59.633 [2024-11-26 19:14:57.961922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.633 [2024-11-26 19:14:58.030908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.893 [2024-11-26 19:14:58.092832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.893  [2024-11-26T19:14:58.592Z] Copying: 5120/5120 [kB] (average 625 MBps) 00:07:00.152 00:07:00.152 19:14:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:00.152 00:07:00.152 real 0m7.215s 00:07:00.152 user 0m5.268s 00:07:00.152 sys 0m3.492s 00:07:00.152 19:14:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.152 19:14:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.152 ************************************ 00:07:00.152 END TEST spdk_dd_bdev_to_bdev 00:07:00.152 ************************************ 00:07:00.152 19:14:58 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:00.152 19:14:58 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:00.152 19:14:58 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.152 19:14:58 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.152 19:14:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:00.152 ************************************ 00:07:00.152 START TEST spdk_dd_uring 00:07:00.152 ************************************ 00:07:00.152 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:00.411 * Looking for test storage... 
00:07:00.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:00.411 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.411 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.412 --rc genhtml_branch_coverage=1 00:07:00.412 --rc genhtml_function_coverage=1 00:07:00.412 --rc genhtml_legend=1 00:07:00.412 --rc geninfo_all_blocks=1 00:07:00.412 --rc geninfo_unexecuted_blocks=1 00:07:00.412 00:07:00.412 ' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:00.412 ************************************ 00:07:00.412 START TEST dd_uring_copy 00:07:00.412 ************************************ 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:00.412 
19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.412 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=2urswz7djku99r5xcsfs8r66gvrhoioziwltd3im0huyelasclx938ztssepha5xlfqpz9hisr6wxq6augcnvh7gdl1tz3omhomnaj3drf7h8w1hmjdakx9ag4awe9c1t2grzp01uqoyl52qwwvph6dsfbmthf2rt56ks3zfxhmm0v9ovq4rxidiu57i34j2g99wjd2gwptx4bdy29gkcia4cusjrxpeiwbwupl5yboq0j534galdwptszcwe0jhc8dau8sqzzsefgra3k8tzjjf8hlqm8n319dkf5rpl8ii2w6zvp0vqtsxn60yd9s6d58ln45ynlsuoso2pr3b3v7pg7uoeiewpipabuieoxlnsy5pag9x5bvp6tup3oc259y9804975eewnuojqj97po968txhfk48alenc446bjkukl7pgbc9xr7mdpmn81p79m5dxo9ipjqob078oxzbeed2w6bh418cjldpxyx4tcjw6ffoygzyk4eyj1epxubuoy6ok3akt4jg2rk0ighzfph3gie6f02cbuyh7avlxljnk194zg6427kcpvuqbbdv1hadj0mbrm0worvviu421ch1j17qqe8bvl2nvqjyatqitmpr6xbvxez4ta7wpr4hm1y4bce36h23pk98gw3sn63slqv95xeoi0ephqlo463uhrxnv59etjipfipqbtxzvatlw1srr3o98uog7f6q12zfroonv2r2oig32txrx09m9e8wxird33qenxo0whzztcys4i6c1f6jo7ds19gx1pycf6zjyrdvjqhqc1er5uoazvlebqmkzr4s95ftuaxh9a336kklqp8vo68h0hcj8eggh6ufpphsqf4fs2q2pcgg2veuw0vej1bitwzk9sffhkar7mvf846lqpmghlb6bxt9z020ax182jq434rssljundyvvwyxnbp87jm4pjoxc11w9p0g05uv58wkhb6aussh2ir0ru2uhpu997ffmo26er8issymd5zuxu2fgtj 00:07:00.413 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
2urswz7djku99r5xcsfs8r66gvrhoioziwltd3im0huyelasclx938ztssepha5xlfqpz9hisr6wxq6augcnvh7gdl1tz3omhomnaj3drf7h8w1hmjdakx9ag4awe9c1t2grzp01uqoyl52qwwvph6dsfbmthf2rt56ks3zfxhmm0v9ovq4rxidiu57i34j2g99wjd2gwptx4bdy29gkcia4cusjrxpeiwbwupl5yboq0j534galdwptszcwe0jhc8dau8sqzzsefgra3k8tzjjf8hlqm8n319dkf5rpl8ii2w6zvp0vqtsxn60yd9s6d58ln45ynlsuoso2pr3b3v7pg7uoeiewpipabuieoxlnsy5pag9x5bvp6tup3oc259y9804975eewnuojqj97po968txhfk48alenc446bjkukl7pgbc9xr7mdpmn81p79m5dxo9ipjqob078oxzbeed2w6bh418cjldpxyx4tcjw6ffoygzyk4eyj1epxubuoy6ok3akt4jg2rk0ighzfph3gie6f02cbuyh7avlxljnk194zg6427kcpvuqbbdv1hadj0mbrm0worvviu421ch1j17qqe8bvl2nvqjyatqitmpr6xbvxez4ta7wpr4hm1y4bce36h23pk98gw3sn63slqv95xeoi0ephqlo463uhrxnv59etjipfipqbtxzvatlw1srr3o98uog7f6q12zfroonv2r2oig32txrx09m9e8wxird33qenxo0whzztcys4i6c1f6jo7ds19gx1pycf6zjyrdvjqhqc1er5uoazvlebqmkzr4s95ftuaxh9a336kklqp8vo68h0hcj8eggh6ufpphsqf4fs2q2pcgg2veuw0vej1bitwzk9sffhkar7mvf846lqpmghlb6bxt9z020ax182jq434rssljundyvvwyxnbp87jm4pjoxc11w9p0g05uv58wkhb6aussh2ir0ru2uhpu997ffmo26er8issymd5zuxu2fgtj 00:07:00.413 19:14:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:00.413 [2024-11-26 19:14:58.842391] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:00.413 [2024-11-26 19:14:58.842549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60956 ] 00:07:00.672 [2024-11-26 19:14:58.987732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.672 [2024-11-26 19:14:59.034142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.672 [2024-11-26 19:14:59.092103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.608  [2024-11-26T19:15:00.307Z] Copying: 511/511 [MB] (average 1073 MBps) 00:07:01.867 00:07:01.867 19:15:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:01.867 19:15:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:01.867 19:15:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.867 19:15:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.867 [2024-11-26 19:15:00.216061] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:01.867 [2024-11-26 19:15:00.216178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60985 ] 00:07:01.867 { 00:07:01.867 "subsystems": [ 00:07:01.867 { 00:07:01.867 "subsystem": "bdev", 00:07:01.867 "config": [ 00:07:01.867 { 00:07:01.867 "params": { 00:07:01.867 "block_size": 512, 00:07:01.867 "num_blocks": 1048576, 00:07:01.867 "name": "malloc0" 00:07:01.867 }, 00:07:01.867 "method": "bdev_malloc_create" 00:07:01.867 }, 00:07:01.867 { 00:07:01.867 "params": { 00:07:01.867 "filename": "/dev/zram1", 00:07:01.867 "name": "uring0" 00:07:01.867 }, 00:07:01.867 "method": "bdev_uring_create" 00:07:01.867 }, 00:07:01.867 { 00:07:01.867 "method": "bdev_wait_for_examine" 00:07:01.867 } 00:07:01.867 ] 00:07:01.867 } 00:07:01.867 ] 00:07:01.867 } 00:07:02.126 [2024-11-26 19:15:00.354578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.126 [2024-11-26 19:15:00.398163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.126 [2024-11-26 19:15:00.458814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.504  [2024-11-26T19:15:02.882Z] Copying: 245/512 [MB] (245 MBps) [2024-11-26T19:15:02.882Z] Copying: 489/512 [MB] (243 MBps) [2024-11-26T19:15:03.192Z] Copying: 512/512 [MB] (average 244 MBps) 00:07:04.752 00:07:04.752 19:15:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:04.752 19:15:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:04.752 19:15:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:04.752 19:15:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.032 [2024-11-26 19:15:03.210700] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:05.032 [2024-11-26 19:15:03.210793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61022 ] 00:07:05.032 { 00:07:05.032 "subsystems": [ 00:07:05.032 { 00:07:05.032 "subsystem": "bdev", 00:07:05.032 "config": [ 00:07:05.032 { 00:07:05.032 "params": { 00:07:05.032 "block_size": 512, 00:07:05.032 "num_blocks": 1048576, 00:07:05.032 "name": "malloc0" 00:07:05.032 }, 00:07:05.032 "method": "bdev_malloc_create" 00:07:05.032 }, 00:07:05.032 { 00:07:05.032 "params": { 00:07:05.032 "filename": "/dev/zram1", 00:07:05.032 "name": "uring0" 00:07:05.032 }, 00:07:05.032 "method": "bdev_uring_create" 00:07:05.032 }, 00:07:05.032 { 00:07:05.032 "method": "bdev_wait_for_examine" 00:07:05.032 } 00:07:05.032 ] 00:07:05.032 } 00:07:05.032 ] 00:07:05.032 } 00:07:05.032 [2024-11-26 19:15:03.354615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.032 [2024-11-26 19:15:03.397834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.032 [2024-11-26 19:15:03.453934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.410  [2024-11-26T19:15:05.789Z] Copying: 184/512 [MB] (184 MBps) [2024-11-26T19:15:06.728Z] Copying: 359/512 [MB] (174 MBps) [2024-11-26T19:15:06.988Z] Copying: 512/512 [MB] (average 184 MBps) 00:07:08.548 00:07:08.548 19:15:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:08.548 19:15:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 2urswz7djku99r5xcsfs8r66gvrhoioziwltd3im0huyelasclx938ztssepha5xlfqpz9hisr6wxq6augcnvh7gdl1tz3omhomnaj3drf7h8w1hmjdakx9ag4awe9c1t2grzp01uqoyl52qwwvph6dsfbmthf2rt56ks3zfxhmm0v9ovq4rxidiu57i34j2g99wjd2gwptx4bdy29gkcia4cusjrxpeiwbwupl5yboq0j534galdwptszcwe0jhc8dau8sqzzsefgra3k8tzjjf8hlqm8n319dkf5rpl8ii2w6zvp0vqtsxn60yd9s6d58ln45ynlsuoso2pr3b3v7pg7uoeiewpipabuieoxlnsy5pag9x5bvp6tup3oc259y9804975eewnuojqj97po968txhfk48alenc446bjkukl7pgbc9xr7mdpmn81p79m5dxo9ipjqob078oxzbeed2w6bh418cjldpxyx4tcjw6ffoygzyk4eyj1epxubuoy6ok3akt4jg2rk0ighzfph3gie6f02cbuyh7avlxljnk194zg6427kcpvuqbbdv1hadj0mbrm0worvviu421ch1j17qqe8bvl2nvqjyatqitmpr6xbvxez4ta7wpr4hm1y4bce36h23pk98gw3sn63slqv95xeoi0ephqlo463uhrxnv59etjipfipqbtxzvatlw1srr3o98uog7f6q12zfroonv2r2oig32txrx09m9e8wxird33qenxo0whzztcys4i6c1f6jo7ds19gx1pycf6zjyrdvjqhqc1er5uoazvlebqmkzr4s95ftuaxh9a336kklqp8vo68h0hcj8eggh6ufpphsqf4fs2q2pcgg2veuw0vej1bitwzk9sffhkar7mvf846lqpmghlb6bxt9z020ax182jq434rssljundyvvwyxnbp87jm4pjoxc11w9p0g05uv58wkhb6aussh2ir0ru2uhpu997ffmo26er8issymd5zuxu2fgtj == 
\2\u\r\s\w\z\7\d\j\k\u\9\9\r\5\x\c\s\f\s\8\r\6\6\g\v\r\h\o\i\o\z\i\w\l\t\d\3\i\m\0\h\u\y\e\l\a\s\c\l\x\9\3\8\z\t\s\s\e\p\h\a\5\x\l\f\q\p\z\9\h\i\s\r\6\w\x\q\6\a\u\g\c\n\v\h\7\g\d\l\1\t\z\3\o\m\h\o\m\n\a\j\3\d\r\f\7\h\8\w\1\h\m\j\d\a\k\x\9\a\g\4\a\w\e\9\c\1\t\2\g\r\z\p\0\1\u\q\o\y\l\5\2\q\w\w\v\p\h\6\d\s\f\b\m\t\h\f\2\r\t\5\6\k\s\3\z\f\x\h\m\m\0\v\9\o\v\q\4\r\x\i\d\i\u\5\7\i\3\4\j\2\g\9\9\w\j\d\2\g\w\p\t\x\4\b\d\y\2\9\g\k\c\i\a\4\c\u\s\j\r\x\p\e\i\w\b\w\u\p\l\5\y\b\o\q\0\j\5\3\4\g\a\l\d\w\p\t\s\z\c\w\e\0\j\h\c\8\d\a\u\8\s\q\z\z\s\e\f\g\r\a\3\k\8\t\z\j\j\f\8\h\l\q\m\8\n\3\1\9\d\k\f\5\r\p\l\8\i\i\2\w\6\z\v\p\0\v\q\t\s\x\n\6\0\y\d\9\s\6\d\5\8\l\n\4\5\y\n\l\s\u\o\s\o\2\p\r\3\b\3\v\7\p\g\7\u\o\e\i\e\w\p\i\p\a\b\u\i\e\o\x\l\n\s\y\5\p\a\g\9\x\5\b\v\p\6\t\u\p\3\o\c\2\5\9\y\9\8\0\4\9\7\5\e\e\w\n\u\o\j\q\j\9\7\p\o\9\6\8\t\x\h\f\k\4\8\a\l\e\n\c\4\4\6\b\j\k\u\k\l\7\p\g\b\c\9\x\r\7\m\d\p\m\n\8\1\p\7\9\m\5\d\x\o\9\i\p\j\q\o\b\0\7\8\o\x\z\b\e\e\d\2\w\6\b\h\4\1\8\c\j\l\d\p\x\y\x\4\t\c\j\w\6\f\f\o\y\g\z\y\k\4\e\y\j\1\e\p\x\u\b\u\o\y\6\o\k\3\a\k\t\4\j\g\2\r\k\0\i\g\h\z\f\p\h\3\g\i\e\6\f\0\2\c\b\u\y\h\7\a\v\l\x\l\j\n\k\1\9\4\z\g\6\4\2\7\k\c\p\v\u\q\b\b\d\v\1\h\a\d\j\0\m\b\r\m\0\w\o\r\v\v\i\u\4\2\1\c\h\1\j\1\7\q\q\e\8\b\v\l\2\n\v\q\j\y\a\t\q\i\t\m\p\r\6\x\b\v\x\e\z\4\t\a\7\w\p\r\4\h\m\1\y\4\b\c\e\3\6\h\2\3\p\k\9\8\g\w\3\s\n\6\3\s\l\q\v\9\5\x\e\o\i\0\e\p\h\q\l\o\4\6\3\u\h\r\x\n\v\5\9\e\t\j\i\p\f\i\p\q\b\t\x\z\v\a\t\l\w\1\s\r\r\3\o\9\8\u\o\g\7\f\6\q\1\2\z\f\r\o\o\n\v\2\r\2\o\i\g\3\2\t\x\r\x\0\9\m\9\e\8\w\x\i\r\d\3\3\q\e\n\x\o\0\w\h\z\z\t\c\y\s\4\i\6\c\1\f\6\j\o\7\d\s\1\9\g\x\1\p\y\c\f\6\z\j\y\r\d\v\j\q\h\q\c\1\e\r\5\u\o\a\z\v\l\e\b\q\m\k\z\r\4\s\9\5\f\t\u\a\x\h\9\a\3\3\6\k\k\l\q\p\8\v\o\6\8\h\0\h\c\j\8\e\g\g\h\6\u\f\p\p\h\s\q\f\4\f\s\2\q\2\p\c\g\g\2\v\e\u\w\0\v\e\j\1\b\i\t\w\z\k\9\s\f\f\h\k\a\r\7\m\v\f\8\4\6\l\q\p\m\g\h\l\b\6\b\x\t\9\z\0\2\0\a\x\1\8\2\j\q\4\3\4\r\s\s\l\j\u\n\d\y\v\v\w\y\x\n\b\p\8\7\j\m\4\p\j\o\x\c\1\1\w\9\p\0\g\0\5\u\v\5\8\w\k\h\b\6\a\u\s\s\h\2\i\r\0\r\u\2\u\h\p\u\9\9\7\f\f\m\o\2\6\e\r\8\i\s\s\y\m\d\5\z\u\x\u\2\f\g\t\j ]] 00:07:08.548 19:15:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:08.548 19:15:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 2urswz7djku99r5xcsfs8r66gvrhoioziwltd3im0huyelasclx938ztssepha5xlfqpz9hisr6wxq6augcnvh7gdl1tz3omhomnaj3drf7h8w1hmjdakx9ag4awe9c1t2grzp01uqoyl52qwwvph6dsfbmthf2rt56ks3zfxhmm0v9ovq4rxidiu57i34j2g99wjd2gwptx4bdy29gkcia4cusjrxpeiwbwupl5yboq0j534galdwptszcwe0jhc8dau8sqzzsefgra3k8tzjjf8hlqm8n319dkf5rpl8ii2w6zvp0vqtsxn60yd9s6d58ln45ynlsuoso2pr3b3v7pg7uoeiewpipabuieoxlnsy5pag9x5bvp6tup3oc259y9804975eewnuojqj97po968txhfk48alenc446bjkukl7pgbc9xr7mdpmn81p79m5dxo9ipjqob078oxzbeed2w6bh418cjldpxyx4tcjw6ffoygzyk4eyj1epxubuoy6ok3akt4jg2rk0ighzfph3gie6f02cbuyh7avlxljnk194zg6427kcpvuqbbdv1hadj0mbrm0worvviu421ch1j17qqe8bvl2nvqjyatqitmpr6xbvxez4ta7wpr4hm1y4bce36h23pk98gw3sn63slqv95xeoi0ephqlo463uhrxnv59etjipfipqbtxzvatlw1srr3o98uog7f6q12zfroonv2r2oig32txrx09m9e8wxird33qenxo0whzztcys4i6c1f6jo7ds19gx1pycf6zjyrdvjqhqc1er5uoazvlebqmkzr4s95ftuaxh9a336kklqp8vo68h0hcj8eggh6ufpphsqf4fs2q2pcgg2veuw0vej1bitwzk9sffhkar7mvf846lqpmghlb6bxt9z020ax182jq434rssljundyvvwyxnbp87jm4pjoxc11w9p0g05uv58wkhb6aussh2ir0ru2uhpu997ffmo26er8issymd5zuxu2fgtj == 
\2\u\r\s\w\z\7\d\j\k\u\9\9\r\5\x\c\s\f\s\8\r\6\6\g\v\r\h\o\i\o\z\i\w\l\t\d\3\i\m\0\h\u\y\e\l\a\s\c\l\x\9\3\8\z\t\s\s\e\p\h\a\5\x\l\f\q\p\z\9\h\i\s\r\6\w\x\q\6\a\u\g\c\n\v\h\7\g\d\l\1\t\z\3\o\m\h\o\m\n\a\j\3\d\r\f\7\h\8\w\1\h\m\j\d\a\k\x\9\a\g\4\a\w\e\9\c\1\t\2\g\r\z\p\0\1\u\q\o\y\l\5\2\q\w\w\v\p\h\6\d\s\f\b\m\t\h\f\2\r\t\5\6\k\s\3\z\f\x\h\m\m\0\v\9\o\v\q\4\r\x\i\d\i\u\5\7\i\3\4\j\2\g\9\9\w\j\d\2\g\w\p\t\x\4\b\d\y\2\9\g\k\c\i\a\4\c\u\s\j\r\x\p\e\i\w\b\w\u\p\l\5\y\b\o\q\0\j\5\3\4\g\a\l\d\w\p\t\s\z\c\w\e\0\j\h\c\8\d\a\u\8\s\q\z\z\s\e\f\g\r\a\3\k\8\t\z\j\j\f\8\h\l\q\m\8\n\3\1\9\d\k\f\5\r\p\l\8\i\i\2\w\6\z\v\p\0\v\q\t\s\x\n\6\0\y\d\9\s\6\d\5\8\l\n\4\5\y\n\l\s\u\o\s\o\2\p\r\3\b\3\v\7\p\g\7\u\o\e\i\e\w\p\i\p\a\b\u\i\e\o\x\l\n\s\y\5\p\a\g\9\x\5\b\v\p\6\t\u\p\3\o\c\2\5\9\y\9\8\0\4\9\7\5\e\e\w\n\u\o\j\q\j\9\7\p\o\9\6\8\t\x\h\f\k\4\8\a\l\e\n\c\4\4\6\b\j\k\u\k\l\7\p\g\b\c\9\x\r\7\m\d\p\m\n\8\1\p\7\9\m\5\d\x\o\9\i\p\j\q\o\b\0\7\8\o\x\z\b\e\e\d\2\w\6\b\h\4\1\8\c\j\l\d\p\x\y\x\4\t\c\j\w\6\f\f\o\y\g\z\y\k\4\e\y\j\1\e\p\x\u\b\u\o\y\6\o\k\3\a\k\t\4\j\g\2\r\k\0\i\g\h\z\f\p\h\3\g\i\e\6\f\0\2\c\b\u\y\h\7\a\v\l\x\l\j\n\k\1\9\4\z\g\6\4\2\7\k\c\p\v\u\q\b\b\d\v\1\h\a\d\j\0\m\b\r\m\0\w\o\r\v\v\i\u\4\2\1\c\h\1\j\1\7\q\q\e\8\b\v\l\2\n\v\q\j\y\a\t\q\i\t\m\p\r\6\x\b\v\x\e\z\4\t\a\7\w\p\r\4\h\m\1\y\4\b\c\e\3\6\h\2\3\p\k\9\8\g\w\3\s\n\6\3\s\l\q\v\9\5\x\e\o\i\0\e\p\h\q\l\o\4\6\3\u\h\r\x\n\v\5\9\e\t\j\i\p\f\i\p\q\b\t\x\z\v\a\t\l\w\1\s\r\r\3\o\9\8\u\o\g\7\f\6\q\1\2\z\f\r\o\o\n\v\2\r\2\o\i\g\3\2\t\x\r\x\0\9\m\9\e\8\w\x\i\r\d\3\3\q\e\n\x\o\0\w\h\z\z\t\c\y\s\4\i\6\c\1\f\6\j\o\7\d\s\1\9\g\x\1\p\y\c\f\6\z\j\y\r\d\v\j\q\h\q\c\1\e\r\5\u\o\a\z\v\l\e\b\q\m\k\z\r\4\s\9\5\f\t\u\a\x\h\9\a\3\3\6\k\k\l\q\p\8\v\o\6\8\h\0\h\c\j\8\e\g\g\h\6\u\f\p\p\h\s\q\f\4\f\s\2\q\2\p\c\g\g\2\v\e\u\w\0\v\e\j\1\b\i\t\w\z\k\9\s\f\f\h\k\a\r\7\m\v\f\8\4\6\l\q\p\m\g\h\l\b\6\b\x\t\9\z\0\2\0\a\x\1\8\2\j\q\4\3\4\r\s\s\l\j\u\n\d\y\v\v\w\y\x\n\b\p\8\7\j\m\4\p\j\o\x\c\1\1\w\9\p\0\g\0\5\u\v\5\8\w\k\h\b\6\a\u\s\s\h\2\i\r\0\r\u\2\u\h\p\u\9\9\7\f\f\m\o\2\6\e\r\8\i\s\s\y\m\d\5\z\u\x\u\2\f\g\t\j ]] 00:07:08.548 19:15:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:08.808 19:15:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:08.808 19:15:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:08.808 19:15:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:08.808 19:15:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.808 [2024-11-26 19:15:07.198665] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:08.808 [2024-11-26 19:15:07.198773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61079 ] 00:07:08.808 { 00:07:08.808 "subsystems": [ 00:07:08.808 { 00:07:08.808 "subsystem": "bdev", 00:07:08.808 "config": [ 00:07:08.808 { 00:07:08.808 "params": { 00:07:08.808 "block_size": 512, 00:07:08.808 "num_blocks": 1048576, 00:07:08.808 "name": "malloc0" 00:07:08.808 }, 00:07:08.808 "method": "bdev_malloc_create" 00:07:08.808 }, 00:07:08.808 { 00:07:08.808 "params": { 00:07:08.808 "filename": "/dev/zram1", 00:07:08.808 "name": "uring0" 00:07:08.808 }, 00:07:08.808 "method": "bdev_uring_create" 00:07:08.808 }, 00:07:08.808 { 00:07:08.808 "method": "bdev_wait_for_examine" 00:07:08.808 } 00:07:08.808 ] 00:07:08.808 } 00:07:08.808 ] 00:07:08.808 } 00:07:09.068 [2024-11-26 19:15:07.354441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.068 [2024-11-26 19:15:07.398760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.068 [2024-11-26 19:15:07.457810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.453  [2024-11-26T19:15:09.896Z] Copying: 170/512 [MB] (170 MBps) [2024-11-26T19:15:10.833Z] Copying: 342/512 [MB] (172 MBps) [2024-11-26T19:15:10.833Z] Copying: 509/512 [MB] (166 MBps) [2024-11-26T19:15:11.402Z] Copying: 512/512 [MB] (average 169 MBps) 00:07:12.962 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:12.962 19:15:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.962 [2024-11-26 19:15:11.150824] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:12.962 [2024-11-26 19:15:11.150952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61141 ] 00:07:12.962 { 00:07:12.962 "subsystems": [ 00:07:12.962 { 00:07:12.962 "subsystem": "bdev", 00:07:12.962 "config": [ 00:07:12.962 { 00:07:12.962 "params": { 00:07:12.962 "block_size": 512, 00:07:12.962 "num_blocks": 1048576, 00:07:12.962 "name": "malloc0" 00:07:12.962 }, 00:07:12.962 "method": "bdev_malloc_create" 00:07:12.962 }, 00:07:12.962 { 00:07:12.962 "params": { 00:07:12.962 "filename": "/dev/zram1", 00:07:12.962 "name": "uring0" 00:07:12.962 }, 00:07:12.962 "method": "bdev_uring_create" 00:07:12.962 }, 00:07:12.962 { 00:07:12.962 "params": { 00:07:12.962 "name": "uring0" 00:07:12.962 }, 00:07:12.962 "method": "bdev_uring_delete" 00:07:12.962 }, 00:07:12.962 { 00:07:12.962 "method": "bdev_wait_for_examine" 00:07:12.962 } 00:07:12.962 ] 00:07:12.962 } 00:07:12.962 ] 00:07:12.962 } 00:07:12.962 [2024-11-26 19:15:11.291157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.962 [2024-11-26 19:15:11.340307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.962 [2024-11-26 19:15:11.399494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.221  [2024-11-26T19:15:12.229Z] Copying: 0/0 [B] (average 0 Bps) 00:07:13.789 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.789 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.789 19:15:12 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:13.789 [2024-11-26 19:15:12.077867] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:13.789 [2024-11-26 19:15:12.077976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61173 ] 00:07:13.789 { 00:07:13.789 "subsystems": [ 00:07:13.789 { 00:07:13.789 "subsystem": "bdev", 00:07:13.789 "config": [ 00:07:13.789 { 00:07:13.789 "params": { 00:07:13.789 "block_size": 512, 00:07:13.789 "num_blocks": 1048576, 00:07:13.789 "name": "malloc0" 00:07:13.789 }, 00:07:13.789 "method": "bdev_malloc_create" 00:07:13.789 }, 00:07:13.789 { 00:07:13.789 "params": { 00:07:13.789 "filename": "/dev/zram1", 00:07:13.789 "name": "uring0" 00:07:13.789 }, 00:07:13.789 "method": "bdev_uring_create" 00:07:13.789 }, 00:07:13.789 { 00:07:13.790 "params": { 00:07:13.790 "name": "uring0" 00:07:13.790 }, 00:07:13.790 "method": "bdev_uring_delete" 00:07:13.790 }, 00:07:13.790 { 00:07:13.790 "method": "bdev_wait_for_examine" 00:07:13.790 } 00:07:13.790 ] 00:07:13.790 } 00:07:13.790 ] 00:07:13.790 } 00:07:13.790 [2024-11-26 19:15:12.224518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.049 [2024-11-26 19:15:12.268540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.049 [2024-11-26 19:15:12.323370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.308 [2024-11-26 19:15:12.542223] bdev.c:8418:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:14.308 [2024-11-26 19:15:12.542287] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:14.308 [2024-11-26 19:15:12.542313] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:14.308 [2024-11-26 19:15:12.542337] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.566 [2024-11-26 19:15:12.873511] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:14.566 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:14.567 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:14.567 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:14.567 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:14.567 19:15:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:15.135 00:07:15.135 real 0m14.520s 00:07:15.135 user 0m9.643s 00:07:15.135 sys 0m12.237s 00:07:15.135 19:15:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.135 19:15:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.135 ************************************ 00:07:15.135 END TEST dd_uring_copy 00:07:15.135 ************************************ 00:07:15.135 00:07:15.135 real 0m14.771s 00:07:15.135 user 0m9.786s 00:07:15.135 sys 0m12.349s 00:07:15.135 19:15:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.135 19:15:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:15.135 ************************************ 00:07:15.135 END TEST spdk_dd_uring 00:07:15.135 ************************************ 00:07:15.135 19:15:13 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:15.135 19:15:13 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.135 19:15:13 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.135 19:15:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:15.135 ************************************ 00:07:15.135 START TEST spdk_dd_sparse 00:07:15.135 ************************************ 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:15.135 * Looking for test storage... 00:07:15.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.135 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.135 --rc genhtml_branch_coverage=1 00:07:15.135 --rc genhtml_function_coverage=1 00:07:15.135 --rc genhtml_legend=1 00:07:15.135 --rc geninfo_all_blocks=1 00:07:15.135 --rc geninfo_unexecuted_blocks=1 00:07:15.135 00:07:15.136 ' 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.136 --rc genhtml_branch_coverage=1 00:07:15.136 --rc genhtml_function_coverage=1 00:07:15.136 --rc genhtml_legend=1 00:07:15.136 --rc geninfo_all_blocks=1 00:07:15.136 --rc geninfo_unexecuted_blocks=1 00:07:15.136 00:07:15.136 ' 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.136 --rc genhtml_branch_coverage=1 00:07:15.136 --rc genhtml_function_coverage=1 00:07:15.136 --rc genhtml_legend=1 00:07:15.136 --rc geninfo_all_blocks=1 00:07:15.136 --rc geninfo_unexecuted_blocks=1 00:07:15.136 00:07:15.136 ' 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.136 --rc genhtml_branch_coverage=1 00:07:15.136 --rc genhtml_function_coverage=1 00:07:15.136 --rc genhtml_legend=1 00:07:15.136 --rc geninfo_all_blocks=1 00:07:15.136 --rc geninfo_unexecuted_blocks=1 00:07:15.136 00:07:15.136 ' 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.136 19:15:13 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:15.136 1+0 records in 00:07:15.136 1+0 records out 00:07:15.136 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00753942 s, 556 MB/s 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:15.136 1+0 records in 00:07:15.136 1+0 records out 00:07:15.136 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00779204 s, 538 MB/s 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:15.136 1+0 records in 00:07:15.136 1+0 records out 00:07:15.136 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00656413 s, 639 MB/s 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.136 19:15:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:15.395 ************************************ 00:07:15.395 START TEST dd_sparse_file_to_file 00:07:15.395 ************************************ 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:15.395 19:15:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:15.395 [2024-11-26 19:15:13.638865] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:15.395 [2024-11-26 19:15:13.639382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61272 ] 00:07:15.395 { 00:07:15.395 "subsystems": [ 00:07:15.395 { 00:07:15.395 "subsystem": "bdev", 00:07:15.395 "config": [ 00:07:15.395 { 00:07:15.395 "params": { 00:07:15.395 "block_size": 4096, 00:07:15.395 "filename": "dd_sparse_aio_disk", 00:07:15.395 "name": "dd_aio" 00:07:15.395 }, 00:07:15.395 "method": "bdev_aio_create" 00:07:15.395 }, 00:07:15.395 { 00:07:15.395 "params": { 00:07:15.395 "lvs_name": "dd_lvstore", 00:07:15.395 "bdev_name": "dd_aio" 00:07:15.395 }, 00:07:15.395 "method": "bdev_lvol_create_lvstore" 00:07:15.395 }, 00:07:15.395 { 00:07:15.395 "method": "bdev_wait_for_examine" 00:07:15.395 } 00:07:15.395 ] 00:07:15.395 } 00:07:15.395 ] 00:07:15.395 } 00:07:15.395 [2024-11-26 19:15:13.793190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.655 [2024-11-26 19:15:13.849371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.655 [2024-11-26 19:15:13.908762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.655  [2024-11-26T19:15:14.354Z] Copying: 12/36 [MB] (average 857 MBps) 00:07:15.914 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:15.914 00:07:15.914 real 0m0.681s 00:07:15.914 user 0m0.415s 00:07:15.914 sys 0m0.381s 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.914 ************************************ 00:07:15.914 END TEST dd_sparse_file_to_file 00:07:15.914 ************************************ 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:15.914 ************************************ 00:07:15.914 START TEST dd_sparse_file_to_bdev 
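The prepare and file_to_file output above boils down to a small shell flow: build a 100 MiB backing file for an AIO bdev, build a 36 MiB sparse source file with three 4 MiB data extents (at offsets 0, 16 MiB and 32 MiB), copy it through spdk_dd with --sparse, and check that apparent size (stat %s) and allocated blocks (stat %b) survive the round trip. The following standalone sketch is an approximation, not the sparse.sh helper itself: it writes the JSON that the harness pipes over /dev/fd/62 to a file instead, and the spdk_dd path is simply the one used by this run.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd      # path used by this job
truncate --size 104857600 dd_sparse_aio_disk                # 100 MiB AIO backing file
dd if=/dev/zero of=file_zero1 bs=4M count=1                 # 4 MiB of data at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4          # 4 MiB at offset 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8          # 4 MiB at 32 MiB -> 36 MiB apparent, 12 MiB allocated
cat > dd_sparse.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create","params":{"block_size":4096,"filename":"dd_sparse_aio_disk","name":"dd_aio"}},
  {"method":"bdev_lvol_create_lvstore","params":{"lvs_name":"dd_lvstore","bdev_name":"dd_aio"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
"$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_sparse.json
stat --printf='%s %b\n' file_zero1 file_zero2               # both should print 37748736 24576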
00:07:15.914 ************************************ 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:15.914 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:15.915 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:15.915 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:16.175 [2024-11-26 19:15:14.358804] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:16.175 [2024-11-26 19:15:14.359135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:07:16.175 { 00:07:16.175 "subsystems": [ 00:07:16.175 { 00:07:16.175 "subsystem": "bdev", 00:07:16.175 "config": [ 00:07:16.175 { 00:07:16.175 "params": { 00:07:16.175 "block_size": 4096, 00:07:16.175 "filename": "dd_sparse_aio_disk", 00:07:16.175 "name": "dd_aio" 00:07:16.175 }, 00:07:16.175 "method": "bdev_aio_create" 00:07:16.175 }, 00:07:16.175 { 00:07:16.175 "params": { 00:07:16.175 "lvs_name": "dd_lvstore", 00:07:16.175 "lvol_name": "dd_lvol", 00:07:16.175 "size_in_mib": 36, 00:07:16.175 "thin_provision": true 00:07:16.175 }, 00:07:16.175 "method": "bdev_lvol_create" 00:07:16.175 }, 00:07:16.175 { 00:07:16.175 "method": "bdev_wait_for_examine" 00:07:16.175 } 00:07:16.175 ] 00:07:16.175 } 00:07:16.175 ] 00:07:16.175 } 00:07:16.175 [2024-11-26 19:15:14.503752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.175 [2024-11-26 19:15:14.554560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.175 [2024-11-26 19:15:14.612194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.435  [2024-11-26T19:15:15.134Z] Copying: 12/36 [MB] (average 500 MBps) 00:07:16.694 00:07:16.694 ************************************ 00:07:16.694 END TEST dd_sparse_file_to_bdev 00:07:16.694 ************************************ 00:07:16.694 00:07:16.694 real 0m0.635s 00:07:16.694 user 0m0.392s 00:07:16.694 sys 0m0.366s 00:07:16.694 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.694 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:16.694 19:15:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:16.694 19:15:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.695 19:15:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.695 19:15:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:16.695 ************************************ 00:07:16.695 START TEST dd_sparse_bdev_to_file 00:07:16.695 ************************************ 00:07:16.695 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:16.695 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:16.695 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:16.695 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:16.695 19:15:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:16.695 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:16.695 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:16.695 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:16.695 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:16.695 { 00:07:16.695 "subsystems": [ 00:07:16.695 { 00:07:16.695 "subsystem": "bdev", 00:07:16.695 "config": [ 00:07:16.695 { 00:07:16.695 "params": { 00:07:16.695 "block_size": 4096, 00:07:16.695 "filename": "dd_sparse_aio_disk", 00:07:16.695 "name": "dd_aio" 00:07:16.695 }, 00:07:16.695 "method": "bdev_aio_create" 00:07:16.695 }, 00:07:16.695 { 00:07:16.695 "method": "bdev_wait_for_examine" 00:07:16.695 } 00:07:16.695 ] 00:07:16.695 } 00:07:16.695 ] 00:07:16.695 } 00:07:16.695 [2024-11-26 19:15:15.059041] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
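The other two directions exercise the logical-volume path: dd_sparse_file_to_bdev writes file_zero2 into dd_lvstore/dd_lvol (a thin-provisioned 36 MiB lvol created by the bdev_lvol_create entry shown above), and dd_sparse_bdev_to_file reads it back into file_zero3. A sketch of the two invocations, assuming the per-direction JSON configs printed in the log have been saved to file_to_bdev.json and bdev_to_file.json (the harness instead generates them with gen_conf and passes /dev/fd/62):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# regular file -> thin-provisioned lvol; --sparse skips holes in the input file
"$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json file_to_bdev.json
# lvol -> regular file; only bdev_aio_create is configured, the lvstore is found on examine
"$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json bdev_to_file.json
stat --printf='%s %b\n' file_zero2 file_zero3               # expected to match: 37748736 24576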
00:07:16.695 [2024-11-26 19:15:15.059158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61352 ] 00:07:16.954 [2024-11-26 19:15:15.202749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.954 [2024-11-26 19:15:15.249700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.954 [2024-11-26 19:15:15.305941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.954  [2024-11-26T19:15:15.653Z] Copying: 12/36 [MB] (average 923 MBps) 00:07:17.213 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:17.213 ************************************ 00:07:17.213 END TEST dd_sparse_bdev_to_file 00:07:17.213 ************************************ 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:17.213 00:07:17.213 real 0m0.648s 00:07:17.213 user 0m0.402s 00:07:17.213 sys 0m0.365s 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.213 19:15:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:17.472 19:15:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:17.472 19:15:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:17.472 19:15:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:17.472 19:15:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:17.472 19:15:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:17.472 ************************************ 00:07:17.472 END TEST spdk_dd_sparse 00:07:17.472 ************************************ 00:07:17.472 00:07:17.472 real 0m2.357s 00:07:17.472 user 0m1.382s 00:07:17.472 sys 0m1.319s 00:07:17.472 19:15:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.472 19:15:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:17.472 19:15:15 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:17.472 19:15:15 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.472 19:15:15 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.472 19:15:15 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.472 ************************************ 00:07:17.472 START TEST spdk_dd_negative 00:07:17.472 ************************************ 00:07:17.472 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:17.472 * Looking for test storage... 00:07:17.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:17.472 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.472 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.472 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.732 --rc genhtml_branch_coverage=1 00:07:17.732 --rc genhtml_function_coverage=1 00:07:17.732 --rc genhtml_legend=1 00:07:17.732 --rc geninfo_all_blocks=1 00:07:17.732 --rc geninfo_unexecuted_blocks=1 00:07:17.732 00:07:17.732 ' 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.732 --rc genhtml_branch_coverage=1 00:07:17.732 --rc genhtml_function_coverage=1 00:07:17.732 --rc genhtml_legend=1 00:07:17.732 --rc geninfo_all_blocks=1 00:07:17.732 --rc geninfo_unexecuted_blocks=1 00:07:17.732 00:07:17.732 ' 00:07:17.732 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.732 --rc genhtml_branch_coverage=1 00:07:17.732 --rc genhtml_function_coverage=1 00:07:17.732 --rc genhtml_legend=1 00:07:17.732 --rc geninfo_all_blocks=1 00:07:17.732 --rc geninfo_unexecuted_blocks=1 00:07:17.732 00:07:17.732 ' 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.733 --rc genhtml_branch_coverage=1 00:07:17.733 --rc genhtml_function_coverage=1 00:07:17.733 --rc genhtml_legend=1 00:07:17.733 --rc geninfo_all_blocks=1 00:07:17.733 --rc geninfo_unexecuted_blocks=1 00:07:17.733 00:07:17.733 ' 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.733 ************************************ 00:07:17.733 START TEST 
dd_invalid_arguments 00:07:17.733 ************************************ 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.733 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.733 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.733 19:15:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.733 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.733 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.733 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:17.733 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:17.733 00:07:17.733 CPU options: 00:07:17.733 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:17.733 (like [0,1,10]) 00:07:17.733 --lcores lcore to CPU mapping list. The list is in the format: 00:07:17.733 [<,lcores[@CPUs]>...] 00:07:17.733 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:17.733 Within the group, '-' is used for range separator, 00:07:17.733 ',' is used for single number separator. 00:07:17.733 '( )' can be omitted for single element group, 00:07:17.733 '@' can be omitted if cpus and lcores have the same value 00:07:17.733 --disable-cpumask-locks Disable CPU core lock files. 00:07:17.733 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:17.733 pollers in the app support interrupt mode) 00:07:17.733 -p, --main-core main (primary) core for DPDK 00:07:17.733 00:07:17.733 Configuration options: 00:07:17.733 -c, --config, --json JSON config file 00:07:17.733 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:17.733 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:17.733 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:17.733 --rpcs-allowed comma-separated list of permitted RPCS 00:07:17.733 --json-ignore-init-errors don't exit on invalid config entry 00:07:17.733 00:07:17.733 Memory options: 00:07:17.733 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:17.733 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:17.733 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:17.733 -R, --huge-unlink unlink huge files after initialization 00:07:17.733 -n, --mem-channels number of memory channels used for DPDK 00:07:17.733 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:17.733 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:17.733 --no-huge run without using hugepages 00:07:17.733 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:17.733 -i, --shm-id shared memory ID (optional) 00:07:17.733 -g, --single-file-segments force creating just one hugetlbfs file 00:07:17.733 00:07:17.733 PCI options: 00:07:17.733 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:17.733 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:17.733 -u, --no-pci disable PCI access 00:07:17.733 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:17.733 00:07:17.733 Log options: 00:07:17.733 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:17.733 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:17.733 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:17.733 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:17.733 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:17.733 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:17.733 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:17.733 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:17.733 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:17.733 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:17.733 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:17.733 --silence-noticelog disable notice level logging to stderr 00:07:17.733 00:07:17.733 Trace options: 00:07:17.733 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:17.733 setting 0 to disable trace (default 32768) 00:07:17.733 Tracepoints vary in size and can use more than one trace entry. 00:07:17.733 -e, --tpoint-group [:] 00:07:17.733 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:17.733 [2024-11-26 19:15:16.053056] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:17.733 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:17.733 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:17.733 bdev_raid, scheduler, all). 00:07:17.733 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:17.733 a tracepoint group. First tpoint inside a group can be enabled by 00:07:17.733 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:17.733 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:17.733 in /include/spdk_internal/trace_defs.h 00:07:17.733 00:07:17.733 Other options: 00:07:17.733 -h, --help show this usage 00:07:17.733 -v, --version print SPDK version 00:07:17.734 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:17.734 --env-context Opaque context for use of the env implementation 00:07:17.734 00:07:17.734 Application specific: 00:07:17.734 [--------- DD Options ---------] 00:07:17.734 --if Input file. Must specify either --if or --ib. 00:07:17.734 --ib Input bdev. Must specifier either --if or --ib 00:07:17.734 --of Output file. Must specify either --of or --ob. 00:07:17.734 --ob Output bdev. Must specify either --of or --ob. 00:07:17.734 --iflag Input file flags. 00:07:17.734 --oflag Output file flags. 00:07:17.734 --bs I/O unit size (default: 4096) 00:07:17.734 --qd Queue depth (default: 2) 00:07:17.734 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:17.734 --skip Skip this many I/O units at start of input. (default: 0) 00:07:17.734 --seek Skip this many I/O units at start of output. (default: 0) 00:07:17.734 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:17.734 --sparse Enable hole skipping in input target 00:07:17.734 Available iflag and oflag values: 00:07:17.734 append - append mode 00:07:17.734 direct - use direct I/O for data 00:07:17.734 directory - fail unless a directory 00:07:17.734 dsync - use synchronized I/O for data 00:07:17.734 noatime - do not update access time 00:07:17.734 noctty - do not assign controlling terminal from file 00:07:17.734 nofollow - do not follow symlinks 00:07:17.734 nonblock - use non-blocking I/O 00:07:17.734 sync - use synchronized I/O for data and metadata 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.734 00:07:17.734 real 0m0.072s 00:07:17.734 user 0m0.037s 00:07:17.734 sys 0m0.032s 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:17.734 ************************************ 00:07:17.734 END TEST dd_invalid_arguments 00:07:17.734 ************************************ 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.734 ************************************ 00:07:17.734 START TEST dd_double_input 00:07:17.734 ************************************ 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.734 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:17.993 [2024-11-26 19:15:16.197865] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
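Every case in this negative suite has the same shape: invoke spdk_dd with a missing or contradictory option under the NOT wrapper from autotest_common.sh (which succeeds only when the wrapped command fails) and rely on spdk_dd's argument validation to abort before any I/O. A minimal standalone equivalent of the dd_double_input case above, assuming the same build path and test dump file:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
    echo "unexpected success: --if and --ib are mutually exclusive" >&2
    exit 1
fi
# expected on stderr: spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both.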
00:07:17.993 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:17.993 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.993 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.993 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.993 00:07:17.993 real 0m0.083s 00:07:17.993 user 0m0.061s 00:07:17.993 sys 0m0.020s 00:07:17.993 ************************************ 00:07:17.994 END TEST dd_double_input 00:07:17.994 ************************************ 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.994 ************************************ 00:07:17.994 START TEST dd_double_output 00:07:17.994 ************************************ 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:17.994 [2024-11-26 19:15:16.332994] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:17.994 ************************************ 00:07:17.994 END TEST dd_double_output 00:07:17.994 ************************************ 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.994 00:07:17.994 real 0m0.082s 00:07:17.994 user 0m0.052s 00:07:17.994 sys 0m0.029s 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.994 ************************************ 00:07:17.994 START TEST dd_no_input 00:07:17.994 ************************************ 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.994 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:18.254 [2024-11-26 19:15:16.464152] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.254 ************************************ 00:07:18.254 END TEST dd_no_input 00:07:18.254 00:07:18.254 real 0m0.070s 00:07:18.254 user 0m0.043s 00:07:18.254 sys 0m0.025s 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:18.254 ************************************ 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.254 ************************************ 00:07:18.254 START TEST dd_no_output 00:07:18.254 ************************************ 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.254 [2024-11-26 19:15:16.583465] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:18.254 19:15:16 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.254 ************************************ 00:07:18.254 END TEST dd_no_output 00:07:18.254 ************************************ 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.254 00:07:18.254 real 0m0.077s 00:07:18.254 user 0m0.054s 00:07:18.254 sys 0m0.022s 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.254 ************************************ 00:07:18.254 START TEST dd_wrong_blocksize 00:07:18.254 ************************************ 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.254 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.255 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:18.514 [2024-11-26 19:15:16.716456] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.514 00:07:18.514 real 0m0.081s 00:07:18.514 user 0m0.049s 00:07:18.514 sys 0m0.031s 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.514 ************************************ 00:07:18.514 END TEST dd_wrong_blocksize 00:07:18.514 ************************************ 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.514 ************************************ 00:07:18.514 START TEST dd_smaller_blocksize 00:07:18.514 ************************************ 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.514 
19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.514 19:15:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:18.514 [2024-11-26 19:15:16.855175] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:18.514 [2024-11-26 19:15:16.855262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61579 ] 00:07:18.774 [2024-11-26 19:15:17.009182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.774 [2024-11-26 19:15:17.074552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.774 [2024-11-26 19:15:17.137601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.037 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:19.607 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:19.607 [2024-11-26 19:15:17.776963] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:19.607 [2024-11-26 19:15:17.777049] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.607 [2024-11-26 19:15:17.912261] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.607 00:07:19.607 real 0m1.190s 00:07:19.607 user 0m0.430s 00:07:19.607 sys 0m0.651s 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.607 ************************************ 00:07:19.607 END TEST dd_smaller_blocksize 00:07:19.607 ************************************ 00:07:19.607 19:15:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.607 ************************************ 00:07:19.607 START TEST dd_invalid_count 00:07:19.607 ************************************ 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
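Unlike the cases above, which are rejected by option validation in main(), dd_smaller_blocksize gets past parsing: --bs=99999999999999 is accepted, but the buffer allocation inside dd_run fails, spdk_dd prints the "try smaller block size" hint and exits non-zero, which is exactly the outcome the test expects. A standalone approximation using the same dump files:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999; then
    echo "unexpected success: an unallocatable --bs must fail" >&2
    exit 1
fi
# expected: spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value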
00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.607 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:19.867 [2024-11-26 19:15:18.096268] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.867 00:07:19.867 real 0m0.080s 00:07:19.867 user 0m0.049s 00:07:19.867 sys 0m0.029s 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.867 ************************************ 00:07:19.867 END TEST dd_invalid_count 00:07:19.867 ************************************ 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.867 ************************************ 
00:07:19.867 START TEST dd_invalid_oflag 00:07:19.867 ************************************ 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:19.867 [2024-11-26 19:15:18.224356] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:19.867 ************************************ 00:07:19.867 END TEST dd_invalid_oflag 00:07:19.867 ************************************ 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.867 00:07:19.867 real 0m0.078s 00:07:19.867 user 0m0.053s 00:07:19.867 sys 0m0.023s 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.867 ************************************ 00:07:19.867 START TEST dd_invalid_iflag 00:07:19.867 
************************************ 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.867 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.868 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.868 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.868 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.868 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.868 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.868 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.868 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:20.126 [2024-11-26 19:15:18.353953] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:20.126 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:20.126 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.126 ************************************ 00:07:20.127 END TEST dd_invalid_iflag 00:07:20.127 ************************************ 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.127 00:07:20.127 real 0m0.079s 00:07:20.127 user 0m0.051s 00:07:20.127 sys 0m0.027s 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.127 ************************************ 00:07:20.127 START TEST dd_unknown_flag 00:07:20.127 ************************************ 00:07:20.127 
19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.127 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.127 [2024-11-26 19:15:18.478960] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:20.127 [2024-11-26 19:15:18.479053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61676 ] 00:07:20.386 [2024-11-26 19:15:18.626667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.386 [2024-11-26 19:15:18.679812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.386 [2024-11-26 19:15:18.738653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.386 [2024-11-26 19:15:18.773800] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:20.386 [2024-11-26 19:15:18.773893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.386 [2024-11-26 19:15:18.773976] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:20.386 [2024-11-26 19:15:18.773991] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.386 [2024-11-26 19:15:18.774245] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:20.386 [2024-11-26 19:15:18.774265] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.386 [2024-11-26 19:15:18.774320] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:20.386 [2024-11-26 19:15:18.774344] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:20.646 [2024-11-26 19:15:18.899706] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.646 00:07:20.646 real 0m0.564s 00:07:20.646 user 0m0.317s 00:07:20.646 sys 0m0.152s 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.646 19:15:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:20.646 ************************************ 00:07:20.646 END TEST dd_unknown_flag 00:07:20.646 ************************************ 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.646 ************************************ 00:07:20.646 START TEST dd_invalid_json 00:07:20.646 ************************************ 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.646 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:20.905 [2024-11-26 19:15:19.099619] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:20.905 [2024-11-26 19:15:19.099866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61710 ] 00:07:20.905 [2024-11-26 19:15:19.244211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.905 [2024-11-26 19:15:19.288688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.905 [2024-11-26 19:15:19.288799] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:20.905 [2024-11-26 19:15:19.288815] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:20.905 [2024-11-26 19:15:19.288824] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.905 [2024-11-26 19:15:19.288859] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.164 00:07:21.164 real 0m0.319s 00:07:21.164 user 0m0.157s 00:07:21.164 sys 0m0.060s 00:07:21.164 ************************************ 00:07:21.164 END TEST dd_invalid_json 00:07:21.164 ************************************ 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.164 ************************************ 00:07:21.164 START TEST dd_invalid_seek 00:07:21.164 ************************************ 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:21.164 
19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.164 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.165 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:21.165 [2024-11-26 19:15:19.472858] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:21.165 [2024-11-26 19:15:19.473171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61734 ] 00:07:21.165 { 00:07:21.165 "subsystems": [ 00:07:21.165 { 00:07:21.165 "subsystem": "bdev", 00:07:21.165 "config": [ 00:07:21.165 { 00:07:21.165 "params": { 00:07:21.165 "block_size": 512, 00:07:21.165 "num_blocks": 512, 00:07:21.165 "name": "malloc0" 00:07:21.165 }, 00:07:21.165 "method": "bdev_malloc_create" 00:07:21.165 }, 00:07:21.165 { 00:07:21.165 "params": { 00:07:21.165 "block_size": 512, 00:07:21.165 "num_blocks": 512, 00:07:21.165 "name": "malloc1" 00:07:21.165 }, 00:07:21.165 "method": "bdev_malloc_create" 00:07:21.165 }, 00:07:21.165 { 00:07:21.165 "method": "bdev_wait_for_examine" 00:07:21.165 } 00:07:21.165 ] 00:07:21.165 } 00:07:21.165 ] 00:07:21.165 } 00:07:21.424 [2024-11-26 19:15:19.618406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.424 [2024-11-26 19:15:19.668854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.424 [2024-11-26 19:15:19.727183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.424 [2024-11-26 19:15:19.793116] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:21.424 [2024-11-26 19:15:19.793426] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.684 [2024-11-26 19:15:19.922510] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.685 ************************************ 00:07:21.685 END TEST dd_invalid_seek 00:07:21.685 ************************************ 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.685 00:07:21.685 real 0m0.570s 00:07:21.685 user 0m0.363s 00:07:21.685 sys 0m0.169s 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.685 19:15:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 ************************************ 00:07:21.685 START TEST dd_invalid_skip 00:07:21.685 ************************************ 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.685 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:21.685 { 00:07:21.685 "subsystems": [ 00:07:21.685 { 00:07:21.685 "subsystem": "bdev", 00:07:21.685 "config": [ 00:07:21.685 { 00:07:21.685 "params": { 00:07:21.685 "block_size": 512, 00:07:21.685 "num_blocks": 512, 00:07:21.685 "name": "malloc0" 00:07:21.685 }, 00:07:21.685 "method": "bdev_malloc_create" 00:07:21.685 }, 00:07:21.685 { 00:07:21.685 "params": { 00:07:21.685 "block_size": 512, 00:07:21.685 "num_blocks": 512, 00:07:21.685 "name": "malloc1" 
00:07:21.685 }, 00:07:21.685 "method": "bdev_malloc_create" 00:07:21.685 }, 00:07:21.685 { 00:07:21.685 "method": "bdev_wait_for_examine" 00:07:21.685 } 00:07:21.685 ] 00:07:21.685 } 00:07:21.685 ] 00:07:21.685 } 00:07:21.685 [2024-11-26 19:15:20.087223] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:21.685 [2024-11-26 19:15:20.087330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61773 ] 00:07:21.944 [2024-11-26 19:15:20.233637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.945 [2024-11-26 19:15:20.278724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.945 [2024-11-26 19:15:20.336342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.204 [2024-11-26 19:15:20.401234] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:22.204 [2024-11-26 19:15:20.401314] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.204 [2024-11-26 19:15:20.528879] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.204 00:07:22.204 real 0m0.578s 00:07:22.204 user 0m0.377s 00:07:22.204 sys 0m0.160s 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.204 ************************************ 00:07:22.204 END TEST dd_invalid_skip 00:07:22.204 ************************************ 00:07:22.204 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.463 ************************************ 00:07:22.463 START TEST dd_invalid_input_count 00:07:22.463 ************************************ 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:22.463 19:15:20 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.463 19:15:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:22.463 { 00:07:22.463 "subsystems": [ 00:07:22.463 { 00:07:22.463 "subsystem": "bdev", 00:07:22.463 "config": [ 00:07:22.463 { 00:07:22.463 "params": { 00:07:22.463 "block_size": 512, 00:07:22.463 "num_blocks": 512, 00:07:22.463 "name": "malloc0" 00:07:22.463 }, 00:07:22.463 "method": "bdev_malloc_create" 00:07:22.463 }, 00:07:22.463 { 00:07:22.463 "params": { 00:07:22.463 "block_size": 512, 00:07:22.463 "num_blocks": 512, 00:07:22.463 "name": "malloc1" 00:07:22.463 }, 00:07:22.463 "method": "bdev_malloc_create" 00:07:22.463 }, 00:07:22.463 { 00:07:22.463 "method": "bdev_wait_for_examine" 00:07:22.463 } 
00:07:22.463 ] 00:07:22.463 } 00:07:22.463 ] 00:07:22.463 } 00:07:22.463 [2024-11-26 19:15:20.720743] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:22.463 [2024-11-26 19:15:20.720851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61808 ] 00:07:22.463 [2024-11-26 19:15:20.870726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.722 [2024-11-26 19:15:20.928837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.722 [2024-11-26 19:15:20.986857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.722 [2024-11-26 19:15:21.047966] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:22.722 [2024-11-26 19:15:21.048025] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.983 [2024-11-26 19:15:21.177663] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.983 00:07:22.983 real 0m0.593s 00:07:22.983 user 0m0.394s 00:07:22.983 sys 0m0.158s 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:22.983 ************************************ 00:07:22.983 END TEST dd_invalid_input_count 00:07:22.983 ************************************ 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.983 ************************************ 00:07:22.983 START TEST dd_invalid_output_count 00:07:22.983 ************************************ 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.983 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:22.983 [2024-11-26 19:15:21.355621] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:22.983 [2024-11-26 19:15:21.355714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61840 ] 00:07:22.983 { 00:07:22.983 "subsystems": [ 00:07:22.983 { 00:07:22.983 "subsystem": "bdev", 00:07:22.983 "config": [ 00:07:22.983 { 00:07:22.983 "params": { 00:07:22.983 "block_size": 512, 00:07:22.983 "num_blocks": 512, 00:07:22.983 "name": "malloc0" 00:07:22.983 }, 00:07:22.983 "method": "bdev_malloc_create" 00:07:22.983 }, 00:07:22.983 { 00:07:22.983 "method": "bdev_wait_for_examine" 00:07:22.983 } 00:07:22.983 ] 00:07:22.983 } 00:07:22.983 ] 00:07:22.983 } 00:07:23.242 [2024-11-26 19:15:21.497699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.242 [2024-11-26 19:15:21.551367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.242 [2024-11-26 19:15:21.611234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.243 [2024-11-26 19:15:21.668033] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:23.243 [2024-11-26 19:15:21.668147] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.502 [2024-11-26 19:15:21.794740] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.502 00:07:23.502 real 0m0.551s 00:07:23.502 user 0m0.341s 00:07:23.502 sys 0m0.162s 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:23.502 ************************************ 00:07:23.502 END TEST dd_invalid_output_count 00:07:23.502 ************************************ 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.502 ************************************ 00:07:23.502 START TEST dd_bs_not_multiple 00:07:23.502 ************************************ 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:23.502 19:15:21 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.502 19:15:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:23.762 [2024-11-26 19:15:21.965307] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:23.762 [2024-11-26 19:15:21.965403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61872 ] 00:07:23.762 { 00:07:23.762 "subsystems": [ 00:07:23.762 { 00:07:23.762 "subsystem": "bdev", 00:07:23.762 "config": [ 00:07:23.762 { 00:07:23.762 "params": { 00:07:23.762 "block_size": 512, 00:07:23.762 "num_blocks": 512, 00:07:23.762 "name": "malloc0" 00:07:23.762 }, 00:07:23.762 "method": "bdev_malloc_create" 00:07:23.762 }, 00:07:23.762 { 00:07:23.762 "params": { 00:07:23.762 "block_size": 512, 00:07:23.762 "num_blocks": 512, 00:07:23.762 "name": "malloc1" 00:07:23.762 }, 00:07:23.762 "method": "bdev_malloc_create" 00:07:23.762 }, 00:07:23.762 { 00:07:23.762 "method": "bdev_wait_for_examine" 00:07:23.762 } 00:07:23.762 ] 00:07:23.762 } 00:07:23.762 ] 00:07:23.762 } 00:07:23.762 [2024-11-26 19:15:22.106309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.762 [2024-11-26 19:15:22.159276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.021 [2024-11-26 19:15:22.217848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.021 [2024-11-26 19:15:22.280050] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:24.021 [2024-11-26 19:15:22.280152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.021 [2024-11-26 19:15:22.401828] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.021 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.280 00:07:24.280 real 0m0.553s 00:07:24.280 user 0m0.345s 00:07:24.280 sys 0m0.171s 00:07:24.280 ************************************ 00:07:24.280 END TEST dd_bs_not_multiple 00:07:24.280 ************************************ 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 00:07:24.280 real 0m6.733s 00:07:24.280 user 0m3.574s 00:07:24.280 sys 0m2.552s 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.280 19:15:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 ************************************ 00:07:24.280 END TEST spdk_dd_negative 00:07:24.280 ************************************ 00:07:24.280 00:07:24.280 real 1m16.118s 00:07:24.280 user 0m47.854s 00:07:24.280 sys 0m34.374s 00:07:24.280 19:15:22 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.280 19:15:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 
************************************ 00:07:24.280 END TEST spdk_dd 00:07:24.280 ************************************ 00:07:24.280 19:15:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:24.280 19:15:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:24.280 19:15:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:24.280 19:15:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.280 19:15:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 19:15:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:24.280 19:15:22 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:24.280 19:15:22 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:24.280 19:15:22 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:24.280 19:15:22 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:24.280 19:15:22 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:24.280 19:15:22 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.280 19:15:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.280 19:15:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.280 19:15:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 ************************************ 00:07:24.280 START TEST nvmf_tcp 00:07:24.280 ************************************ 00:07:24.280 19:15:22 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.280 * Looking for test storage... 00:07:24.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.540 19:15:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:24.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.540 --rc genhtml_branch_coverage=1 00:07:24.540 --rc genhtml_function_coverage=1 00:07:24.540 --rc genhtml_legend=1 00:07:24.540 --rc geninfo_all_blocks=1 00:07:24.540 --rc geninfo_unexecuted_blocks=1 00:07:24.540 00:07:24.540 ' 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:24.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.540 --rc genhtml_branch_coverage=1 00:07:24.540 --rc genhtml_function_coverage=1 00:07:24.540 --rc genhtml_legend=1 00:07:24.540 --rc geninfo_all_blocks=1 00:07:24.540 --rc geninfo_unexecuted_blocks=1 00:07:24.540 00:07:24.540 ' 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:24.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.540 --rc genhtml_branch_coverage=1 00:07:24.540 --rc genhtml_function_coverage=1 00:07:24.540 --rc genhtml_legend=1 00:07:24.540 --rc geninfo_all_blocks=1 00:07:24.540 --rc geninfo_unexecuted_blocks=1 00:07:24.540 00:07:24.540 ' 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:24.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.540 --rc genhtml_branch_coverage=1 00:07:24.540 --rc genhtml_function_coverage=1 00:07:24.540 --rc genhtml_legend=1 00:07:24.540 --rc geninfo_all_blocks=1 00:07:24.540 --rc geninfo_unexecuted_blocks=1 00:07:24.540 00:07:24.540 ' 00:07:24.540 19:15:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:24.540 19:15:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:24.540 19:15:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.540 19:15:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.540 ************************************ 00:07:24.540 START TEST nvmf_target_core 00:07:24.540 ************************************ 00:07:24.540 19:15:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:24.540 * Looking for test storage... 00:07:24.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:24.540 19:15:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:24.540 19:15:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:24.540 19:15:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:24.800 19:15:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:24.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.800 --rc genhtml_branch_coverage=1 00:07:24.800 --rc genhtml_function_coverage=1 00:07:24.800 --rc genhtml_legend=1 00:07:24.800 --rc geninfo_all_blocks=1 00:07:24.800 --rc geninfo_unexecuted_blocks=1 00:07:24.800 00:07:24.800 ' 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:24.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.800 --rc genhtml_branch_coverage=1 00:07:24.800 --rc genhtml_function_coverage=1 00:07:24.800 --rc genhtml_legend=1 00:07:24.800 --rc geninfo_all_blocks=1 00:07:24.800 --rc geninfo_unexecuted_blocks=1 00:07:24.800 00:07:24.800 ' 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:24.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.800 --rc genhtml_branch_coverage=1 00:07:24.800 --rc genhtml_function_coverage=1 00:07:24.800 --rc genhtml_legend=1 00:07:24.800 --rc geninfo_all_blocks=1 00:07:24.800 --rc geninfo_unexecuted_blocks=1 00:07:24.800 00:07:24.800 ' 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:24.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.800 --rc genhtml_branch_coverage=1 00:07:24.800 --rc genhtml_function_coverage=1 00:07:24.800 --rc genhtml_legend=1 00:07:24.800 --rc geninfo_all_blocks=1 00:07:24.800 --rc geninfo_unexecuted_blocks=1 00:07:24.800 00:07:24.800 ' 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.800 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.801 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.801 ************************************ 00:07:24.801 START TEST nvmf_host_management 00:07:24.801 ************************************ 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.801 * Looking for test storage... 
00:07:24.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.801 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.061 --rc genhtml_branch_coverage=1 00:07:25.061 --rc genhtml_function_coverage=1 00:07:25.061 --rc genhtml_legend=1 00:07:25.061 --rc geninfo_all_blocks=1 00:07:25.061 --rc geninfo_unexecuted_blocks=1 00:07:25.061 00:07:25.061 ' 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.061 --rc genhtml_branch_coverage=1 00:07:25.061 --rc genhtml_function_coverage=1 00:07:25.061 --rc genhtml_legend=1 00:07:25.061 --rc geninfo_all_blocks=1 00:07:25.061 --rc geninfo_unexecuted_blocks=1 00:07:25.061 00:07:25.061 ' 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.061 --rc genhtml_branch_coverage=1 00:07:25.061 --rc genhtml_function_coverage=1 00:07:25.061 --rc genhtml_legend=1 00:07:25.061 --rc geninfo_all_blocks=1 00:07:25.061 --rc geninfo_unexecuted_blocks=1 00:07:25.061 00:07:25.061 ' 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.061 --rc genhtml_branch_coverage=1 00:07:25.061 --rc genhtml_function_coverage=1 00:07:25.061 --rc genhtml_legend=1 00:07:25.061 --rc geninfo_all_blocks=1 00:07:25.061 --rc geninfo_unexecuted_blocks=1 00:07:25.061 00:07:25.061 ' 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:25.061 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.062 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.062 19:15:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:25.062 Cannot find device "nvmf_init_br" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:25.062 Cannot find device "nvmf_init_br2" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:25.062 Cannot find device "nvmf_tgt_br" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.062 Cannot find device "nvmf_tgt_br2" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:25.062 Cannot find device "nvmf_init_br" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:25.062 Cannot find device "nvmf_init_br2" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:25.062 Cannot find device "nvmf_tgt_br" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:25.062 Cannot find device "nvmf_tgt_br2" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:25.062 Cannot find device "nvmf_br" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:25.062 Cannot find device "nvmf_init_if" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:25.062 Cannot find device "nvmf_init_if2" 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:25.062 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:25.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:25.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:07:25.322 00:07:25.322 --- 10.0.0.3 ping statistics --- 00:07:25.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.322 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:25.322 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:25.322 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:07:25.322 00:07:25.322 --- 10.0.0.4 ping statistics --- 00:07:25.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.322 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:25.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:25.322 00:07:25.322 --- 10.0.0.1 ping statistics --- 00:07:25.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.322 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:25.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:25.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:07:25.322 00:07:25.322 --- 10.0.0.2 ping statistics --- 00:07:25.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.322 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.322 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62215 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62215 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62215 ']' 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.648 19:15:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.648 [2024-11-26 19:15:23.842090] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:25.648 [2024-11-26 19:15:23.842183] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.648 [2024-11-26 19:15:24.001197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.923 [2024-11-26 19:15:24.070646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.923 [2024-11-26 19:15:24.070731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.923 [2024-11-26 19:15:24.070752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.923 [2024-11-26 19:15:24.070762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.923 [2024-11-26 19:15:24.070771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.923 [2024-11-26 19:15:24.072208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.923 [2024-11-26 19:15:24.072294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.923 [2024-11-26 19:15:24.072377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.923 [2024-11-26 19:15:24.072376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.923 [2024-11-26 19:15:24.132715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.923 [2024-11-26 19:15:24.245722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
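Condensed from the nvmf_veth_init trace above, the virtual test network used for this host_management run boils down to the commands below (an illustrative recap of what the log already shows, not a new script; namespace and interface names are taken verbatim from the trace, and the per-interface "up" steps are folded together):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br        # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # likewise nvmf_init_if2
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four single-packet pings in the trace (10.0.0.3 and 10.0.0.4 from the initiator side, 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) confirm that the bridge forwards traffic both ways before the NVMe/TCP target is started on 10.0.0.3:4420.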
00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.923 Malloc0 00:07:25.923 [2024-11-26 19:15:24.320225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.923 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62261 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62261 /var/tmp/bdevperf.sock 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62261 ']' 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:26.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
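The log does not echo the contents of rpcs.txt that the `cat` above pipes into rpc_cmd, but given the results that do appear (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, the Malloc0 bdev, the listener on 10.0.0.3 port 4420, and the later add_host/remove_host calls against nqn.2016-06.io.spdk:cnode0 and nqn.2016-06.io.spdk:host0), a plausible equivalent sequence of standard SPDK RPCs would be the following hedged sketch, not the script's verbatim contents:

    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The transport itself was already created separately by the earlier `rpc_cmd nvmf_create_transport -t tcp -o -u 8192` step, so it would not need to appear in this batch.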
00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:26.182 { 00:07:26.182 "params": { 00:07:26.182 "name": "Nvme$subsystem", 00:07:26.182 "trtype": "$TEST_TRANSPORT", 00:07:26.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.182 "adrfam": "ipv4", 00:07:26.182 "trsvcid": "$NVMF_PORT", 00:07:26.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.182 "hdgst": ${hdgst:-false}, 00:07:26.182 "ddgst": ${ddgst:-false} 00:07:26.182 }, 00:07:26.182 "method": "bdev_nvme_attach_controller" 00:07:26.182 } 00:07:26.182 EOF 00:07:26.182 )") 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:26.182 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:26.182 "params": { 00:07:26.182 "name": "Nvme0", 00:07:26.182 "trtype": "tcp", 00:07:26.182 "traddr": "10.0.0.3", 00:07:26.182 "adrfam": "ipv4", 00:07:26.182 "trsvcid": "4420", 00:07:26.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.182 "hdgst": false, 00:07:26.182 "ddgst": false 00:07:26.182 }, 00:07:26.182 "method": "bdev_nvme_attach_controller" 00:07:26.182 }' 00:07:26.182 [2024-11-26 19:15:24.434274] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:26.182 [2024-11-26 19:15:24.434401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62261 ] 00:07:26.441 [2024-11-26 19:15:24.650040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.441 [2024-11-26 19:15:24.724440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.441 [2024-11-26 19:15:24.793293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.701 Running I/O for 10 seconds... 
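Expanded for readability, the single-line JSON that gen_nvmf_target_json prints above, and that bdevperf consumes through `--json /dev/fd/63`, is the following bdev_nvme_attach_controller entry; every value comes directly from the printf output in the trace:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

This attaches bdevperf, acting as the NVMe-oF host, to the nqn.2016-06.io.spdk:cnode0 subsystem exported on 10.0.0.3:4420 and exposes it as bdev Nvme0n1 for the verify workload requested on the command line (-q 64 -o 65536 -w verify -t 10).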
00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.271 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.271 19:15:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.272 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:27.272 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.272 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.272 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.272 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:27.272 task offset: 114688 on job bdev=Nvme0n1 fails 00:07:27.272 00:07:27.272 Latency(us) 00:07:27.272 [2024-11-26T19:15:25.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.272 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:27.272 Job: Nvme0n1 ended in about 0.60 seconds with error 00:07:27.272 Verification LBA range: start 0x0 length 0x400 00:07:27.272 Nvme0n1 : 0.60 1481.76 92.61 105.84 0.00 39243.05 2234.18 36938.47 00:07:27.272 [2024-11-26T19:15:25.712Z] =================================================================================================================== 00:07:27.272 [2024-11-26T19:15:25.712Z] Total : 1481.76 92.61 105.84 0.00 39243.05 2234.18 36938.47 00:07:27.272 [2024-11-26 19:15:25.517114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517292] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517545] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517762] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.272 [2024-11-26 19:15:25.517928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.272 [2024-11-26 19:15:25.517938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.517949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.517957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.517981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.517992] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518198] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518417] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.273 [2024-11-26 19:15:25.518577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e22d0 is same with the state(6) to be set 00:07:27.273 [2024-11-26 19:15:25.518789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.273 [2024-11-26 19:15:25.518808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.273 [2024-11-26 19:15:25.518828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.273 [2024-11-26 19:15:25.518846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.273 [2024-11-26 19:15:25.518864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.273 [2024-11-26 19:15:25.518873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7ce0 is same with the state(6) to be set 00:07:27.273 [2024-11-26 19:15:25.520005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:27.273 [2024-11-26 19:15:25.521919] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.273 [2024-11-26 19:15:25.521940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e7ce0 (9): Bad file descriptor 00:07:27.273 [2024-11-26 19:15:25.532625] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62261 00:07:28.213 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62261) - No such process 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.213 { 00:07:28.213 "params": { 00:07:28.213 "name": "Nvme$subsystem", 00:07:28.213 "trtype": "$TEST_TRANSPORT", 00:07:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.213 "adrfam": "ipv4", 00:07:28.213 "trsvcid": "$NVMF_PORT", 00:07:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.213 "hdgst": ${hdgst:-false}, 00:07:28.213 "ddgst": ${ddgst:-false} 00:07:28.213 }, 00:07:28.213 "method": 
"bdev_nvme_attach_controller" 00:07:28.213 } 00:07:28.213 EOF 00:07:28.213 )") 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:28.213 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.213 "params": { 00:07:28.213 "name": "Nvme0", 00:07:28.213 "trtype": "tcp", 00:07:28.213 "traddr": "10.0.0.3", 00:07:28.213 "adrfam": "ipv4", 00:07:28.213 "trsvcid": "4420", 00:07:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:28.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:28.213 "hdgst": false, 00:07:28.213 "ddgst": false 00:07:28.213 }, 00:07:28.213 "method": "bdev_nvme_attach_controller" 00:07:28.213 }' 00:07:28.213 [2024-11-26 19:15:26.566701] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:07:28.214 [2024-11-26 19:15:26.566818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62299 ] 00:07:28.473 [2024-11-26 19:15:26.710694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.473 [2024-11-26 19:15:26.769867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.473 [2024-11-26 19:15:26.834520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.732 Running I/O for 1 seconds... 00:07:29.669 1536.00 IOPS, 96.00 MiB/s 00:07:29.669 Latency(us) 00:07:29.669 [2024-11-26T19:15:28.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.669 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:29.669 Verification LBA range: start 0x0 length 0x400 00:07:29.669 Nvme0n1 : 1.01 1590.48 99.40 0.00 0.00 39465.68 4140.68 36461.85 00:07:29.669 [2024-11-26T19:15:28.109Z] =================================================================================================================== 00:07:29.669 [2024-11-26T19:15:28.109Z] Total : 1590.48 99.40 0.00 0.00 39465.68 4140.68 36461.85 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:29.929 
19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.929 rmmod nvme_tcp 00:07:29.929 rmmod nvme_fabrics 00:07:29.929 rmmod nvme_keyring 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62215 ']' 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62215 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62215 ']' 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62215 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62215 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62215' 00:07:29.929 killing process with pid 62215 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62215 00:07:29.929 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62215 00:07:30.188 [2024-11-26 19:15:28.532795] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:30.188 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:30.447 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:30.448 00:07:30.448 real 0m5.773s 00:07:30.448 user 0m20.627s 00:07:30.448 sys 0m1.587s 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.448 ************************************ 00:07:30.448 END TEST nvmf_host_management 00:07:30.448 ************************************ 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.448 ************************************ 00:07:30.448 START TEST nvmf_lvol 00:07:30.448 ************************************ 00:07:30.448 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.707 * Looking for test 
storage... 00:07:30.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.708 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.708 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.708 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.708 --rc genhtml_branch_coverage=1 00:07:30.708 --rc genhtml_function_coverage=1 00:07:30.708 --rc genhtml_legend=1 00:07:30.708 --rc geninfo_all_blocks=1 00:07:30.708 --rc geninfo_unexecuted_blocks=1 00:07:30.708 00:07:30.708 ' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.708 --rc genhtml_branch_coverage=1 00:07:30.708 --rc genhtml_function_coverage=1 00:07:30.708 --rc genhtml_legend=1 00:07:30.708 --rc geninfo_all_blocks=1 00:07:30.708 --rc geninfo_unexecuted_blocks=1 00:07:30.708 00:07:30.708 ' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.708 --rc genhtml_branch_coverage=1 00:07:30.708 --rc genhtml_function_coverage=1 00:07:30.708 --rc genhtml_legend=1 00:07:30.708 --rc geninfo_all_blocks=1 00:07:30.708 --rc geninfo_unexecuted_blocks=1 00:07:30.708 00:07:30.708 ' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.708 --rc genhtml_branch_coverage=1 00:07:30.708 --rc genhtml_function_coverage=1 00:07:30.708 --rc genhtml_legend=1 00:07:30.708 --rc geninfo_all_blocks=1 00:07:30.708 --rc geninfo_unexecuted_blocks=1 00:07:30.708 00:07:30.708 ' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.708 19:15:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.708 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:30.708 
19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.708 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
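The variables traced here describe the veth-pair and network-namespace topology that the ip commands further down in this trace actually create. A condensed sketch of that topology, limited to the first initiator/target pair and using only names and addresses taken from the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, follows the same pattern):

    # Condensed sketch of the topology nvmf_veth_init builds (single pair shown);
    # commands mirror the ip invocations logged below, illustrative only.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_FIRST_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the host-side ends together
    ip link set nvmf_tgt_br master nvmf_br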
00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:30.709 Cannot find device "nvmf_init_br" 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:30.709 Cannot find device "nvmf_init_br2" 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:30.709 Cannot find device "nvmf_tgt_br" 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.709 Cannot find device "nvmf_tgt_br2" 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:30.709 Cannot find device "nvmf_init_br" 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:30.709 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:30.966 Cannot find device "nvmf_init_br2" 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:30.966 Cannot find device "nvmf_tgt_br" 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:30.966 Cannot find device "nvmf_tgt_br2" 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:30.966 Cannot find device "nvmf_br" 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:30.966 Cannot find device "nvmf_init_if" 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:30.966 Cannot find device "nvmf_init_if2" 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.966 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:30.967 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:31.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:31.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:07:31.225 00:07:31.225 --- 10.0.0.3 ping statistics --- 00:07:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.225 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:31.225 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:31.225 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:07:31.225 00:07:31.225 --- 10.0.0.4 ping statistics --- 00:07:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.225 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:31.225 00:07:31.225 --- 10.0.0.1 ping statistics --- 00:07:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.225 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:31.225 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:31.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:31.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:07:31.225 00:07:31.225 --- 10.0.0.2 ping statistics --- 00:07:31.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.225 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62563 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62563 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62563 ']' 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.226 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.226 [2024-11-26 19:15:29.539142] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
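nvmfappstart then launches the target inside the namespace — ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 — and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A reduced sketch of that start-and-wait idiom; the polling loop is illustrative, the real waitforlisten in autotest_common.sh is more thorough:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# Poll until the target owns /var/tmp/spdk.sock and responds to RPCs.
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done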
00:07:31.226 [2024-11-26 19:15:29.539221] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.486 [2024-11-26 19:15:29.689264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.486 [2024-11-26 19:15:29.753633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.486 [2024-11-26 19:15:29.753699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.486 [2024-11-26 19:15:29.753712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.486 [2024-11-26 19:15:29.753722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.486 [2024-11-26 19:15:29.753731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.486 [2024-11-26 19:15:29.754990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.486 [2024-11-26 19:15:29.755126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.486 [2024-11-26 19:15:29.755133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.486 [2024-11-26 19:15:29.814624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:32.422 [2024-11-26 19:15:30.786014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.422 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.990 19:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:32.990 19:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:33.249 19:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:33.249 19:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:33.508 19:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:33.767 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c16cafd6-5d8d-4fe2-84ff-ddf93d19c10f 00:07:33.767 19:15:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c16cafd6-5d8d-4fe2-84ff-ddf93d19c10f lvol 20 00:07:34.026 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=17950020-a348-478a-9161-0cef235ef4fe 00:07:34.026 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.284 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17950020-a348-478a-9161-0cef235ef4fe 00:07:34.543 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:34.800 [2024-11-26 19:15:33.117594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:34.800 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:35.059 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62643 00:07:35.059 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:35.059 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:35.993 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 17950020-a348-478a-9161-0cef235ef4fe MY_SNAPSHOT 00:07:36.560 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cca67a4e-5799-4d4e-8aa3-a476ad615e8e 00:07:36.560 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 17950020-a348-478a-9161-0cef235ef4fe 30 00:07:36.817 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone cca67a4e-5799-4d4e-8aa3-a476ad615e8e MY_CLONE 00:07:37.076 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e65df9da-0bbe-47d7-bf38-7c329a4f9a62 00:07:37.076 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e65df9da-0bbe-47d7-bf38-7c329a4f9a62 00:07:37.334 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62643 00:07:45.448 Initializing NVMe Controllers 00:07:45.448 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:45.448 Controller IO queue size 128, less than required. 00:07:45.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:45.448 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:45.448 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:45.448 Initialization complete. Launching workers. 
00:07:45.448 ======================================================== 00:07:45.448 Latency(us) 00:07:45.448 Device Information : IOPS MiB/s Average min max 00:07:45.448 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9211.80 35.98 13907.72 2582.47 57874.56 00:07:45.448 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9613.00 37.55 13330.80 1184.25 65256.57 00:07:45.448 ======================================================== 00:07:45.448 Total : 18824.80 73.53 13613.11 1184.25 65256.57 00:07:45.448 00:07:45.448 19:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:45.707 19:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 17950020-a348-478a-9161-0cef235ef4fe 00:07:45.965 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c16cafd6-5d8d-4fe2-84ff-ddf93d19c10f 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.224 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.224 rmmod nvme_tcp 00:07:46.224 rmmod nvme_fabrics 00:07:46.483 rmmod nvme_keyring 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62563 ']' 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62563 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62563 ']' 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62563 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62563 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 62563' 00:07:46.483 killing process with pid 62563 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62563 00:07:46.483 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62563 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:46.743 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:47.003 00:07:47.003 real 0m16.394s 00:07:47.003 user 1m7.049s 00:07:47.003 sys 0m4.303s 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.003 ************************************ 00:07:47.003 END TEST nvmf_lvol 00:07:47.003 ************************************ 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.003 ************************************ 00:07:47.003 START TEST nvmf_lvs_grow 00:07:47.003 ************************************ 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:47.003 * Looking for test storage... 00:07:47.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.003 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.263 --rc genhtml_branch_coverage=1 00:07:47.263 --rc genhtml_function_coverage=1 00:07:47.263 --rc genhtml_legend=1 00:07:47.263 --rc geninfo_all_blocks=1 00:07:47.263 --rc geninfo_unexecuted_blocks=1 00:07:47.263 00:07:47.263 ' 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.263 --rc genhtml_branch_coverage=1 00:07:47.263 --rc genhtml_function_coverage=1 00:07:47.263 --rc genhtml_legend=1 00:07:47.263 --rc geninfo_all_blocks=1 00:07:47.263 --rc geninfo_unexecuted_blocks=1 00:07:47.263 00:07:47.263 ' 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.263 --rc genhtml_branch_coverage=1 00:07:47.263 --rc genhtml_function_coverage=1 00:07:47.263 --rc genhtml_legend=1 00:07:47.263 --rc geninfo_all_blocks=1 00:07:47.263 --rc geninfo_unexecuted_blocks=1 00:07:47.263 00:07:47.263 ' 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.263 --rc genhtml_branch_coverage=1 00:07:47.263 --rc genhtml_function_coverage=1 00:07:47.263 --rc genhtml_legend=1 00:07:47.263 --rc geninfo_all_blocks=1 00:07:47.263 --rc geninfo_unexecuted_blocks=1 00:07:47.263 00:07:47.263 ' 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:47.263 19:15:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.263 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.264 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
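nvmf_lvs_grow.sh keeps two RPC endpoints side by side: the default /var/tmp/spdk.sock for the nvmf target started by nvmfappstart, and /var/tmp/bdevperf.sock for the bdevperf process it launches later. A short sketch of how the same rpc.py is pointed at either socket; the paths and the attach arguments are the ones that appear in this log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Default socket: drives the nvmf target.
$RPC nvmf_create_transport -t tcp -o -u 8192
# Explicit socket via -s: drives bdevperf instead.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

Note that -s means the RPC listen address when it precedes the method name, and the NVMe/TCP service ID (port 4420) when it is an argument of bdev_nvme_attach_controller.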
00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
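As in the lvol test above, nvmftestinit arms trap nvmftestfini SIGINT SIGTERM EXIT before touching the network, so the namespace, veth devices and firewall rules are torn down even if the test aborts mid-way. A stripped-down sketch of that guard; the real nvmftestfini in nvmf/common.sh does considerably more (process cleanup, module unload), so treat the body as illustrative:

nvmftestfini() {
    # Mirror of the teardown traced at the end of each test case:
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged rules
    ip link delete nvmf_br type bridge 2> /dev/null          # remove the bridge
    ip netns delete nvmf_tgt_ns_spdk 2> /dev/null            # remove the target namespace
}
trap nvmftestfini SIGINT SIGTERM EXIT                        # runs on normal exit and on interruption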
00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:47.264 Cannot find device "nvmf_init_br" 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:47.264 Cannot find device "nvmf_init_br2" 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:47.264 Cannot find device "nvmf_tgt_br" 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:47.264 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.264 Cannot find device "nvmf_tgt_br2" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:47.265 Cannot find device "nvmf_init_br" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:47.265 Cannot find device "nvmf_init_br2" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:47.265 Cannot find device "nvmf_tgt_br" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:47.265 Cannot find device "nvmf_tgt_br2" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:47.265 Cannot find device "nvmf_br" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:47.265 Cannot find device "nvmf_init_if" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:47.265 Cannot find device "nvmf_init_if2" 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.265 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
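With the four peer interfaces enslaved to nvmf_br, the harness opens TCP port 4420 through the host firewall in the lines that follow; every rule is added through the ipts wrapper, which appends an 'SPDK_NVMF:' comment so teardown can strip exactly those rules again (the iptables-save | grep -v SPDK_NVMF | iptables-restore pass seen at the end of the previous test). A sketch of that tag-and-strip idiom, assuming the wrapper has the shape implied by the expanded commands below:

ipts() {
    # Add an iptables rule and tag it with its own text for later removal.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on the initiator veth
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow traffic hairpinning on the bridge
# Teardown: remove every tagged rule, leave unrelated firewall state alone.
iptables-save | grep -v SPDK_NVMF | iptables-restore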
00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:47.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:47.524 00:07:47.524 --- 10.0.0.3 ping statistics --- 00:07:47.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.524 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:47.524 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:47.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:07:47.524 00:07:47.524 --- 10.0.0.4 ping statistics --- 00:07:47.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.524 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:47.524 00:07:47.524 --- 10.0.0.1 ping statistics --- 00:07:47.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.524 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:47.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:47.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:07:47.524 00:07:47.524 --- 10.0.0.2 ping statistics --- 00:07:47.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.524 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63020 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63020 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63020 ']' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.524 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 [2024-11-26 19:15:46.002635] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:07:47.784 [2024-11-26 19:15:46.002751] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.784 [2024-11-26 19:15:46.151297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.784 [2024-11-26 19:15:46.207079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.784 [2024-11-26 19:15:46.207142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.784 [2024-11-26 19:15:46.207167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.784 [2024-11-26 19:15:46.207175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.784 [2024-11-26 19:15:46.207181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.784 [2024-11-26 19:15:46.207564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.042 [2024-11-26 19:15:46.264592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.610 19:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.611 19:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:48.611 19:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.611 19:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.611 19:15:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.611 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.611 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:48.870 [2024-11-26 19:15:47.251417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.870 ************************************ 00:07:48.870 START TEST lvs_grow_clean 00:07:48.870 ************************************ 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:48.870 19:15:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.870 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.439 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:49.439 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:49.698 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1570dc3c-0df7-498b-bc53-55c7960fda27 00:07:49.698 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:07:49.698 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:49.957 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:49.957 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:49.957 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1570dc3c-0df7-498b-bc53-55c7960fda27 lvol 150 00:07:50.216 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=54bade9a-c77b-47c6-b0b1-38a5d0b60d26 00:07:50.216 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.216 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:50.475 [2024-11-26 19:15:48.748854] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.475 [2024-11-26 19:15:48.748980] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.475 true 00:07:50.475 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:07:50.475 19:15:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:50.735 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:50.735 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:50.995 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 54bade9a-c77b-47c6-b0b1-38a5d0b60d26 00:07:51.262 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:51.546 [2024-11-26 19:15:49.737937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:51.546 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63108 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63108 /var/tmp/bdevperf.sock 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63108 ']' 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.805 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:51.805 [2024-11-26 19:15:50.099248] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
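In outline, the lvs_grow_clean run above reduces to the rpc.py sequence below — a condensed sketch assembled only from commands already shown in this log (repo paths shortened, <lvs-uuid>/<lvol-uuid> standing in for the generated UUIDs; the grow step is issued further down while bdevperf drives randwrite I/O):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
truncate -s 200M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096        # 200M file -> 51200 4K blocks
scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs   # 49 data clusters
scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                        # 150M lvol (38 x 4M clusters)
truncate -s 400M test/nvmf/target/aio_bdev                                    # grow the backing file
scripts/rpc.py bdev_aio_rescan aio_bdev                                       # bdev now reports 102400 blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>                           # during the run; total_data_clusters 49 -> 99
scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'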
00:07:51.805 [2024-11-26 19:15:50.099361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63108 ] 00:07:52.064 [2024-11-26 19:15:50.248508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.064 [2024-11-26 19:15:50.307317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.064 [2024-11-26 19:15:50.367195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.064 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.064 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:52.064 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:52.632 Nvme0n1 00:07:52.632 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:52.632 [ 00:07:52.632 { 00:07:52.632 "name": "Nvme0n1", 00:07:52.632 "aliases": [ 00:07:52.632 "54bade9a-c77b-47c6-b0b1-38a5d0b60d26" 00:07:52.632 ], 00:07:52.632 "product_name": "NVMe disk", 00:07:52.632 "block_size": 4096, 00:07:52.632 "num_blocks": 38912, 00:07:52.632 "uuid": "54bade9a-c77b-47c6-b0b1-38a5d0b60d26", 00:07:52.632 "numa_id": -1, 00:07:52.632 "assigned_rate_limits": { 00:07:52.632 "rw_ios_per_sec": 0, 00:07:52.632 "rw_mbytes_per_sec": 0, 00:07:52.632 "r_mbytes_per_sec": 0, 00:07:52.632 "w_mbytes_per_sec": 0 00:07:52.632 }, 00:07:52.632 "claimed": false, 00:07:52.632 "zoned": false, 00:07:52.632 "supported_io_types": { 00:07:52.632 "read": true, 00:07:52.632 "write": true, 00:07:52.632 "unmap": true, 00:07:52.632 "flush": true, 00:07:52.632 "reset": true, 00:07:52.632 "nvme_admin": true, 00:07:52.632 "nvme_io": true, 00:07:52.632 "nvme_io_md": false, 00:07:52.632 "write_zeroes": true, 00:07:52.632 "zcopy": false, 00:07:52.632 "get_zone_info": false, 00:07:52.632 "zone_management": false, 00:07:52.632 "zone_append": false, 00:07:52.632 "compare": true, 00:07:52.632 "compare_and_write": true, 00:07:52.632 "abort": true, 00:07:52.632 "seek_hole": false, 00:07:52.632 "seek_data": false, 00:07:52.632 "copy": true, 00:07:52.632 "nvme_iov_md": false 00:07:52.632 }, 00:07:52.632 "memory_domains": [ 00:07:52.632 { 00:07:52.632 "dma_device_id": "system", 00:07:52.632 "dma_device_type": 1 00:07:52.632 } 00:07:52.632 ], 00:07:52.632 "driver_specific": { 00:07:52.632 "nvme": [ 00:07:52.632 { 00:07:52.632 "trid": { 00:07:52.632 "trtype": "TCP", 00:07:52.632 "adrfam": "IPv4", 00:07:52.632 "traddr": "10.0.0.3", 00:07:52.632 "trsvcid": "4420", 00:07:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:52.632 }, 00:07:52.632 "ctrlr_data": { 00:07:52.632 "cntlid": 1, 00:07:52.632 "vendor_id": "0x8086", 00:07:52.632 "model_number": "SPDK bdev Controller", 00:07:52.632 "serial_number": "SPDK0", 00:07:52.632 "firmware_revision": "25.01", 00:07:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.632 "oacs": { 00:07:52.632 "security": 0, 00:07:52.632 "format": 0, 00:07:52.632 "firmware": 0, 
00:07:52.632 "ns_manage": 0 00:07:52.632 }, 00:07:52.632 "multi_ctrlr": true, 00:07:52.632 "ana_reporting": false 00:07:52.632 }, 00:07:52.632 "vs": { 00:07:52.632 "nvme_version": "1.3" 00:07:52.632 }, 00:07:52.632 "ns_data": { 00:07:52.632 "id": 1, 00:07:52.632 "can_share": true 00:07:52.632 } 00:07:52.632 } 00:07:52.632 ], 00:07:52.632 "mp_policy": "active_passive" 00:07:52.632 } 00:07:52.632 } 00:07:52.632 ] 00:07:52.632 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:52.632 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63124 00:07:52.632 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:52.892 Running I/O for 10 seconds... 00:07:53.830 Latency(us) 00:07:53.830 [2024-11-26T19:15:52.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.830 Nvme0n1 : 1.00 6719.00 26.25 0.00 0.00 0.00 0.00 0.00 00:07:53.830 [2024-11-26T19:15:52.270Z] =================================================================================================================== 00:07:53.830 [2024-11-26T19:15:52.270Z] Total : 6719.00 26.25 0.00 0.00 0.00 0.00 0.00 00:07:53.830 00:07:54.767 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:07:54.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.767 Nvme0n1 : 2.00 6661.50 26.02 0.00 0.00 0.00 0.00 0.00 00:07:54.767 [2024-11-26T19:15:53.207Z] =================================================================================================================== 00:07:54.767 [2024-11-26T19:15:53.207Z] Total : 6661.50 26.02 0.00 0.00 0.00 0.00 0.00 00:07:54.767 00:07:55.026 true 00:07:55.026 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:07:55.026 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:55.285 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:55.285 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:55.285 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63124 00:07:55.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.854 Nvme0n1 : 3.00 6600.00 25.78 0.00 0.00 0.00 0.00 0.00 00:07:55.854 [2024-11-26T19:15:54.294Z] =================================================================================================================== 00:07:55.854 [2024-11-26T19:15:54.294Z] Total : 6600.00 25.78 0.00 0.00 0.00 0.00 0.00 00:07:55.854 00:07:56.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.792 Nvme0n1 : 4.00 6632.75 25.91 0.00 0.00 0.00 0.00 0.00 00:07:56.792 [2024-11-26T19:15:55.232Z] 
=================================================================================================================== 00:07:56.792 [2024-11-26T19:15:55.232Z] Total : 6632.75 25.91 0.00 0.00 0.00 0.00 0.00 00:07:56.792 00:07:57.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.730 Nvme0n1 : 5.00 6627.00 25.89 0.00 0.00 0.00 0.00 0.00 00:07:57.730 [2024-11-26T19:15:56.170Z] =================================================================================================================== 00:07:57.730 [2024-11-26T19:15:56.170Z] Total : 6627.00 25.89 0.00 0.00 0.00 0.00 0.00 00:07:57.730 00:07:59.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.123 Nvme0n1 : 6.00 6391.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:59.123 [2024-11-26T19:15:57.563Z] =================================================================================================================== 00:07:59.123 [2024-11-26T19:15:57.563Z] Total : 6391.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:59.123 00:07:59.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.737 Nvme0n1 : 7.00 6439.86 25.16 0.00 0.00 0.00 0.00 0.00 00:07:59.737 [2024-11-26T19:15:58.177Z] =================================================================================================================== 00:07:59.737 [2024-11-26T19:15:58.177Z] Total : 6439.86 25.16 0.00 0.00 0.00 0.00 0.00 00:07:59.737 00:08:01.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.114 Nvme0n1 : 8.00 6460.38 25.24 0.00 0.00 0.00 0.00 0.00 00:08:01.114 [2024-11-26T19:15:59.555Z] =================================================================================================================== 00:08:01.115 [2024-11-26T19:15:59.555Z] Total : 6460.38 25.24 0.00 0.00 0.00 0.00 0.00 00:08:01.115 00:08:02.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.057 Nvme0n1 : 9.00 6490.44 25.35 0.00 0.00 0.00 0.00 0.00 00:08:02.057 [2024-11-26T19:16:00.497Z] =================================================================================================================== 00:08:02.057 [2024-11-26T19:16:00.497Z] Total : 6490.44 25.35 0.00 0.00 0.00 0.00 0.00 00:08:02.057 00:08:02.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.992 Nvme0n1 : 10.00 6501.80 25.40 0.00 0.00 0.00 0.00 0.00 00:08:02.992 [2024-11-26T19:16:01.432Z] =================================================================================================================== 00:08:02.992 [2024-11-26T19:16:01.432Z] Total : 6501.80 25.40 0.00 0.00 0.00 0.00 0.00 00:08:02.992 00:08:02.992 00:08:02.992 Latency(us) 00:08:02.992 [2024-11-26T19:16:01.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.992 Nvme0n1 : 10.02 6502.32 25.40 0.00 0.00 19677.51 6166.34 131548.63 00:08:02.992 [2024-11-26T19:16:01.432Z] =================================================================================================================== 00:08:02.992 [2024-11-26T19:16:01.432Z] Total : 6502.32 25.40 0.00 0.00 19677.51 6166.34 131548.63 00:08:02.992 { 00:08:02.992 "results": [ 00:08:02.992 { 00:08:02.992 "job": "Nvme0n1", 00:08:02.992 "core_mask": "0x2", 00:08:02.992 "workload": "randwrite", 00:08:02.992 "status": "finished", 00:08:02.992 "queue_depth": 128, 00:08:02.992 "io_size": 4096, 00:08:02.992 "runtime": 
10.018882, 00:08:02.992 "iops": 6502.322315004808, 00:08:02.993 "mibps": 25.39969654298753, 00:08:02.993 "io_failed": 0, 00:08:02.993 "io_timeout": 0, 00:08:02.993 "avg_latency_us": 19677.50829348345, 00:08:02.993 "min_latency_us": 6166.341818181818, 00:08:02.993 "max_latency_us": 131548.62545454546 00:08:02.993 } 00:08:02.993 ], 00:08:02.993 "core_count": 1 00:08:02.993 } 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63108 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63108 ']' 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63108 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63108 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:02.993 killing process with pid 63108 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63108' 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63108 00:08:02.993 Received shutdown signal, test time was about 10.000000 seconds 00:08:02.993 00:08:02.993 Latency(us) 00:08:02.993 [2024-11-26T19:16:01.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.993 [2024-11-26T19:16:01.433Z] =================================================================================================================== 00:08:02.993 [2024-11-26T19:16:01.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:02.993 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63108 00:08:03.252 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:03.511 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.769 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:03.769 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:08:04.027 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:04.027 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:04.027 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.286 [2024-11-26 19:16:02.555614] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.286 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.287 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.287 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:04.287 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:08:04.545 request: 00:08:04.545 { 00:08:04.545 "uuid": "1570dc3c-0df7-498b-bc53-55c7960fda27", 00:08:04.545 "method": "bdev_lvol_get_lvstores", 00:08:04.545 "req_id": 1 00:08:04.545 } 00:08:04.545 Got JSON-RPC error response 00:08:04.545 response: 00:08:04.545 { 00:08:04.545 "code": -19, 00:08:04.545 "message": "No such device" 00:08:04.545 } 00:08:04.545 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:04.545 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.545 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:04.545 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.545 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.804 aio_bdev 00:08:04.804 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
54bade9a-c77b-47c6-b0b1-38a5d0b60d26 00:08:04.804 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=54bade9a-c77b-47c6-b0b1-38a5d0b60d26 00:08:04.804 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.804 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:04.804 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.804 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.804 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:05.371 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54bade9a-c77b-47c6-b0b1-38a5d0b60d26 -t 2000 00:08:05.371 [ 00:08:05.371 { 00:08:05.371 "name": "54bade9a-c77b-47c6-b0b1-38a5d0b60d26", 00:08:05.371 "aliases": [ 00:08:05.371 "lvs/lvol" 00:08:05.371 ], 00:08:05.371 "product_name": "Logical Volume", 00:08:05.371 "block_size": 4096, 00:08:05.371 "num_blocks": 38912, 00:08:05.371 "uuid": "54bade9a-c77b-47c6-b0b1-38a5d0b60d26", 00:08:05.371 "assigned_rate_limits": { 00:08:05.371 "rw_ios_per_sec": 0, 00:08:05.371 "rw_mbytes_per_sec": 0, 00:08:05.371 "r_mbytes_per_sec": 0, 00:08:05.371 "w_mbytes_per_sec": 0 00:08:05.371 }, 00:08:05.371 "claimed": false, 00:08:05.371 "zoned": false, 00:08:05.371 "supported_io_types": { 00:08:05.371 "read": true, 00:08:05.371 "write": true, 00:08:05.371 "unmap": true, 00:08:05.371 "flush": false, 00:08:05.371 "reset": true, 00:08:05.371 "nvme_admin": false, 00:08:05.371 "nvme_io": false, 00:08:05.371 "nvme_io_md": false, 00:08:05.371 "write_zeroes": true, 00:08:05.371 "zcopy": false, 00:08:05.371 "get_zone_info": false, 00:08:05.371 "zone_management": false, 00:08:05.371 "zone_append": false, 00:08:05.371 "compare": false, 00:08:05.371 "compare_and_write": false, 00:08:05.371 "abort": false, 00:08:05.371 "seek_hole": true, 00:08:05.371 "seek_data": true, 00:08:05.371 "copy": false, 00:08:05.371 "nvme_iov_md": false 00:08:05.371 }, 00:08:05.371 "driver_specific": { 00:08:05.371 "lvol": { 00:08:05.371 "lvol_store_uuid": "1570dc3c-0df7-498b-bc53-55c7960fda27", 00:08:05.371 "base_bdev": "aio_bdev", 00:08:05.371 "thin_provision": false, 00:08:05.371 "num_allocated_clusters": 38, 00:08:05.371 "snapshot": false, 00:08:05.371 "clone": false, 00:08:05.371 "esnap_clone": false 00:08:05.371 } 00:08:05.371 } 00:08:05.371 } 00:08:05.371 ] 00:08:05.371 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:05.371 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:05.371 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:08:05.938 19:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:05.938 19:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:08:05.938 19:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:05.938 19:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:05.938 19:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 54bade9a-c77b-47c6-b0b1-38a5d0b60d26 00:08:06.197 19:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1570dc3c-0df7-498b-bc53-55c7960fda27 00:08:06.767 19:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.030 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.288 ************************************ 00:08:07.288 END TEST lvs_grow_clean 00:08:07.288 ************************************ 00:08:07.288 00:08:07.288 real 0m18.395s 00:08:07.288 user 0m17.132s 00:08:07.288 sys 0m2.603s 00:08:07.288 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.288 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.288 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:07.288 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.288 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.288 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.547 ************************************ 00:08:07.547 START TEST lvs_grow_dirty 00:08:07.547 ************************************ 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.547 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.805 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.805 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.064 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4992378c-2d59-4480-885f-a9a77d208c48 00:08:08.064 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.064 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:08.323 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.323 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.323 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4992378c-2d59-4480-885f-a9a77d208c48 lvol 150 00:08:08.890 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=00fc9247-9538-490e-afd2-72f6fc6374a9 00:08:08.890 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:08.890 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.890 [2024-11-26 19:16:07.326064] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.890 [2024-11-26 19:16:07.327307] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:09.147 true 00:08:09.147 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:09.147 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.405 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:09.405 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.664 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00fc9247-9538-490e-afd2-72f6fc6374a9 00:08:09.922 19:16:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:10.181 [2024-11-26 19:16:08.610936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:10.440 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:10.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63388 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63388 /var/tmp/bdevperf.sock 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63388 ']' 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.699 19:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.699 [2024-11-26 19:16:08.952403] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:08:10.699 [2024-11-26 19:16:08.952773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63388 ] 00:08:10.699 [2024-11-26 19:16:09.105326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.957 [2024-11-26 19:16:09.178286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.957 [2024-11-26 19:16:09.238704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.957 19:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.957 19:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:10.957 19:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:11.524 Nvme0n1 00:08:11.524 19:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:11.783 [ 00:08:11.783 { 00:08:11.783 "name": "Nvme0n1", 00:08:11.783 "aliases": [ 00:08:11.783 "00fc9247-9538-490e-afd2-72f6fc6374a9" 00:08:11.783 ], 00:08:11.783 "product_name": "NVMe disk", 00:08:11.783 "block_size": 4096, 00:08:11.783 "num_blocks": 38912, 00:08:11.783 "uuid": "00fc9247-9538-490e-afd2-72f6fc6374a9", 00:08:11.783 "numa_id": -1, 00:08:11.783 "assigned_rate_limits": { 00:08:11.783 "rw_ios_per_sec": 0, 00:08:11.783 "rw_mbytes_per_sec": 0, 00:08:11.783 "r_mbytes_per_sec": 0, 00:08:11.783 "w_mbytes_per_sec": 0 00:08:11.783 }, 00:08:11.783 "claimed": false, 00:08:11.783 "zoned": false, 00:08:11.783 "supported_io_types": { 00:08:11.783 "read": true, 00:08:11.783 "write": true, 00:08:11.783 "unmap": true, 00:08:11.783 "flush": true, 00:08:11.783 "reset": true, 00:08:11.783 "nvme_admin": true, 00:08:11.783 "nvme_io": true, 00:08:11.783 "nvme_io_md": false, 00:08:11.783 "write_zeroes": true, 00:08:11.783 "zcopy": false, 00:08:11.783 "get_zone_info": false, 00:08:11.783 "zone_management": false, 00:08:11.783 "zone_append": false, 00:08:11.783 "compare": true, 00:08:11.783 "compare_and_write": true, 00:08:11.783 "abort": true, 00:08:11.783 "seek_hole": false, 00:08:11.783 "seek_data": false, 00:08:11.783 "copy": true, 00:08:11.783 "nvme_iov_md": false 00:08:11.783 }, 00:08:11.783 "memory_domains": [ 00:08:11.783 { 00:08:11.783 "dma_device_id": "system", 00:08:11.783 "dma_device_type": 1 00:08:11.783 } 00:08:11.783 ], 00:08:11.783 "driver_specific": { 00:08:11.783 "nvme": [ 00:08:11.783 { 00:08:11.783 "trid": { 00:08:11.783 "trtype": "TCP", 00:08:11.783 "adrfam": "IPv4", 00:08:11.783 "traddr": "10.0.0.3", 00:08:11.783 "trsvcid": "4420", 00:08:11.783 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:11.783 }, 00:08:11.783 "ctrlr_data": { 00:08:11.783 "cntlid": 1, 00:08:11.783 "vendor_id": "0x8086", 00:08:11.783 "model_number": "SPDK bdev Controller", 00:08:11.783 "serial_number": "SPDK0", 00:08:11.783 "firmware_revision": "25.01", 00:08:11.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.783 "oacs": { 00:08:11.783 "security": 0, 00:08:11.783 "format": 0, 00:08:11.783 "firmware": 0, 
00:08:11.783 "ns_manage": 0 00:08:11.783 }, 00:08:11.783 "multi_ctrlr": true, 00:08:11.783 "ana_reporting": false 00:08:11.783 }, 00:08:11.783 "vs": { 00:08:11.783 "nvme_version": "1.3" 00:08:11.783 }, 00:08:11.783 "ns_data": { 00:08:11.783 "id": 1, 00:08:11.783 "can_share": true 00:08:11.783 } 00:08:11.783 } 00:08:11.783 ], 00:08:11.783 "mp_policy": "active_passive" 00:08:11.783 } 00:08:11.783 } 00:08:11.783 ] 00:08:11.783 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63404 00:08:11.783 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.783 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:11.783 Running I/O for 10 seconds... 00:08:13.159 Latency(us) 00:08:13.159 [2024-11-26T19:16:11.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.159 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:13.159 [2024-11-26T19:16:11.599Z] =================================================================================================================== 00:08:13.159 [2024-11-26T19:16:11.599Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:13.159 00:08:13.727 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:13.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.986 Nvme0n1 : 2.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:13.986 [2024-11-26T19:16:12.426Z] =================================================================================================================== 00:08:13.986 [2024-11-26T19:16:12.426Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:13.986 00:08:13.986 true 00:08:13.986 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.986 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:14.553 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:14.553 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:14.553 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63404 00:08:14.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.812 Nvme0n1 : 3.00 7124.33 27.83 0.00 0.00 0.00 0.00 0.00 00:08:14.812 [2024-11-26T19:16:13.252Z] =================================================================================================================== 00:08:14.812 [2024-11-26T19:16:13.252Z] Total : 7124.33 27.83 0.00 0.00 0.00 0.00 0.00 00:08:14.812 00:08:15.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.748 Nvme0n1 : 4.00 7153.00 27.94 0.00 0.00 0.00 0.00 0.00 00:08:15.748 [2024-11-26T19:16:14.188Z] 
=================================================================================================================== 00:08:15.748 [2024-11-26T19:16:14.188Z] Total : 7153.00 27.94 0.00 0.00 0.00 0.00 0.00 00:08:15.748 00:08:17.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.150 Nvme0n1 : 5.00 7043.20 27.51 0.00 0.00 0.00 0.00 0.00 00:08:17.150 [2024-11-26T19:16:15.590Z] =================================================================================================================== 00:08:17.150 [2024-11-26T19:16:15.590Z] Total : 7043.20 27.51 0.00 0.00 0.00 0.00 0.00 00:08:17.150 00:08:18.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.085 Nvme0n1 : 6.00 6991.17 27.31 0.00 0.00 0.00 0.00 0.00 00:08:18.085 [2024-11-26T19:16:16.526Z] =================================================================================================================== 00:08:18.086 [2024-11-26T19:16:16.526Z] Total : 6991.17 27.31 0.00 0.00 0.00 0.00 0.00 00:08:18.086 00:08:19.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.021 Nvme0n1 : 7.00 6818.43 26.63 0.00 0.00 0.00 0.00 0.00 00:08:19.021 [2024-11-26T19:16:17.461Z] =================================================================================================================== 00:08:19.021 [2024-11-26T19:16:17.461Z] Total : 6818.43 26.63 0.00 0.00 0.00 0.00 0.00 00:08:19.021 00:08:19.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.958 Nvme0n1 : 8.00 6775.75 26.47 0.00 0.00 0.00 0.00 0.00 00:08:19.958 [2024-11-26T19:16:18.398Z] =================================================================================================================== 00:08:19.958 [2024-11-26T19:16:18.398Z] Total : 6775.75 26.47 0.00 0.00 0.00 0.00 0.00 00:08:19.958 00:08:20.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.897 Nvme0n1 : 9.00 6728.44 26.28 0.00 0.00 0.00 0.00 0.00 00:08:20.897 [2024-11-26T19:16:19.337Z] =================================================================================================================== 00:08:20.897 [2024-11-26T19:16:19.337Z] Total : 6728.44 26.28 0.00 0.00 0.00 0.00 0.00 00:08:20.897 00:08:21.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.855 Nvme0n1 : 10.00 6716.00 26.23 0.00 0.00 0.00 0.00 0.00 00:08:21.855 [2024-11-26T19:16:20.295Z] =================================================================================================================== 00:08:21.855 [2024-11-26T19:16:20.295Z] Total : 6716.00 26.23 0.00 0.00 0.00 0.00 0.00 00:08:21.855 00:08:21.855 00:08:21.855 Latency(us) 00:08:21.855 [2024-11-26T19:16:20.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.855 Nvme0n1 : 10.01 6723.48 26.26 0.00 0.00 19031.10 8519.68 200182.69 00:08:21.855 [2024-11-26T19:16:20.295Z] =================================================================================================================== 00:08:21.855 [2024-11-26T19:16:20.295Z] Total : 6723.48 26.26 0.00 0.00 19031.10 8519.68 200182.69 00:08:21.855 { 00:08:21.855 "results": [ 00:08:21.855 { 00:08:21.855 "job": "Nvme0n1", 00:08:21.855 "core_mask": "0x2", 00:08:21.855 "workload": "randwrite", 00:08:21.855 "status": "finished", 00:08:21.855 "queue_depth": 128, 00:08:21.855 "io_size": 4096, 00:08:21.855 "runtime": 
10.007917, 00:08:21.855 "iops": 6723.4770232407, 00:08:21.855 "mibps": 26.263582122033984, 00:08:21.855 "io_failed": 0, 00:08:21.855 "io_timeout": 0, 00:08:21.855 "avg_latency_us": 19031.104359010387, 00:08:21.855 "min_latency_us": 8519.68, 00:08:21.855 "max_latency_us": 200182.69090909092 00:08:21.855 } 00:08:21.855 ], 00:08:21.855 "core_count": 1 00:08:21.855 } 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63388 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63388 ']' 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63388 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63388 00:08:21.855 killing process with pid 63388 00:08:21.855 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.855 00:08:21.855 Latency(us) 00:08:21.855 [2024-11-26T19:16:20.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.855 [2024-11-26T19:16:20.295Z] =================================================================================================================== 00:08:21.855 [2024-11-26T19:16:20.295Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63388' 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63388 00:08:21.855 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63388 00:08:22.115 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:22.376 19:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.636 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:22.636 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63020 00:08:23.204 19:16:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63020 00:08:23.204 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63020 Killed "${NVMF_APP[@]}" "$@" 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63537 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63537 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63537 ']' 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.204 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.204 [2024-11-26 19:16:21.437019] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:08:23.204 [2024-11-26 19:16:21.437111] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.204 [2024-11-26 19:16:21.591627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.463 [2024-11-26 19:16:21.660631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.463 [2024-11-26 19:16:21.660986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.463 [2024-11-26 19:16:21.661168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.463 [2024-11-26 19:16:21.661376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.463 [2024-11-26 19:16:21.661391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:23.463 [2024-11-26 19:16:21.661841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.463 [2024-11-26 19:16:21.723842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.463 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.463 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:23.464 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.464 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.464 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.464 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.464 19:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.722 [2024-11-26 19:16:22.140345] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:23.722 [2024-11-26 19:16:22.140809] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:23.722 [2024-11-26 19:16:22.141101] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 00fc9247-9538-490e-afd2-72f6fc6374a9 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=00fc9247-9538-490e-afd2-72f6fc6374a9 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.981 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.240 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00fc9247-9538-490e-afd2-72f6fc6374a9 -t 2000 00:08:24.499 [ 00:08:24.499 { 00:08:24.499 "name": "00fc9247-9538-490e-afd2-72f6fc6374a9", 00:08:24.499 "aliases": [ 00:08:24.499 "lvs/lvol" 00:08:24.499 ], 00:08:24.499 "product_name": "Logical Volume", 00:08:24.499 "block_size": 4096, 00:08:24.499 "num_blocks": 38912, 00:08:24.499 "uuid": "00fc9247-9538-490e-afd2-72f6fc6374a9", 00:08:24.499 "assigned_rate_limits": { 00:08:24.499 "rw_ios_per_sec": 0, 00:08:24.499 "rw_mbytes_per_sec": 0, 00:08:24.499 "r_mbytes_per_sec": 0, 00:08:24.499 "w_mbytes_per_sec": 0 00:08:24.499 }, 00:08:24.499 
"claimed": false, 00:08:24.499 "zoned": false, 00:08:24.499 "supported_io_types": { 00:08:24.499 "read": true, 00:08:24.499 "write": true, 00:08:24.499 "unmap": true, 00:08:24.499 "flush": false, 00:08:24.499 "reset": true, 00:08:24.499 "nvme_admin": false, 00:08:24.499 "nvme_io": false, 00:08:24.499 "nvme_io_md": false, 00:08:24.499 "write_zeroes": true, 00:08:24.499 "zcopy": false, 00:08:24.499 "get_zone_info": false, 00:08:24.499 "zone_management": false, 00:08:24.499 "zone_append": false, 00:08:24.499 "compare": false, 00:08:24.499 "compare_and_write": false, 00:08:24.499 "abort": false, 00:08:24.499 "seek_hole": true, 00:08:24.499 "seek_data": true, 00:08:24.499 "copy": false, 00:08:24.499 "nvme_iov_md": false 00:08:24.499 }, 00:08:24.499 "driver_specific": { 00:08:24.499 "lvol": { 00:08:24.499 "lvol_store_uuid": "4992378c-2d59-4480-885f-a9a77d208c48", 00:08:24.499 "base_bdev": "aio_bdev", 00:08:24.499 "thin_provision": false, 00:08:24.499 "num_allocated_clusters": 38, 00:08:24.499 "snapshot": false, 00:08:24.499 "clone": false, 00:08:24.499 "esnap_clone": false 00:08:24.499 } 00:08:24.499 } 00:08:24.499 } 00:08:24.499 ] 00:08:24.499 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:24.499 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:24.499 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:24.756 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:24.756 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:24.756 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:25.015 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:25.015 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.582 [2024-11-26 19:16:23.745964] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.582 19:16:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:25.582 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:25.841 request: 00:08:25.841 { 00:08:25.841 "uuid": "4992378c-2d59-4480-885f-a9a77d208c48", 00:08:25.841 "method": "bdev_lvol_get_lvstores", 00:08:25.841 "req_id": 1 00:08:25.841 } 00:08:25.841 Got JSON-RPC error response 00:08:25.841 response: 00:08:25.841 { 00:08:25.841 "code": -19, 00:08:25.841 "message": "No such device" 00:08:25.841 } 00:08:25.841 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:25.841 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.841 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.841 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.841 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.099 aio_bdev 00:08:26.099 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 00fc9247-9538-490e-afd2-72f6fc6374a9 00:08:26.099 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=00fc9247-9538-490e-afd2-72f6fc6374a9 00:08:26.099 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.099 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:26.099 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.099 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.099 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:26.358 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00fc9247-9538-490e-afd2-72f6fc6374a9 -t 2000 00:08:26.617 [ 00:08:26.617 { 
00:08:26.617 "name": "00fc9247-9538-490e-afd2-72f6fc6374a9", 00:08:26.617 "aliases": [ 00:08:26.617 "lvs/lvol" 00:08:26.617 ], 00:08:26.617 "product_name": "Logical Volume", 00:08:26.617 "block_size": 4096, 00:08:26.617 "num_blocks": 38912, 00:08:26.617 "uuid": "00fc9247-9538-490e-afd2-72f6fc6374a9", 00:08:26.617 "assigned_rate_limits": { 00:08:26.617 "rw_ios_per_sec": 0, 00:08:26.617 "rw_mbytes_per_sec": 0, 00:08:26.617 "r_mbytes_per_sec": 0, 00:08:26.617 "w_mbytes_per_sec": 0 00:08:26.617 }, 00:08:26.617 "claimed": false, 00:08:26.617 "zoned": false, 00:08:26.617 "supported_io_types": { 00:08:26.617 "read": true, 00:08:26.617 "write": true, 00:08:26.617 "unmap": true, 00:08:26.617 "flush": false, 00:08:26.617 "reset": true, 00:08:26.617 "nvme_admin": false, 00:08:26.617 "nvme_io": false, 00:08:26.617 "nvme_io_md": false, 00:08:26.617 "write_zeroes": true, 00:08:26.617 "zcopy": false, 00:08:26.617 "get_zone_info": false, 00:08:26.617 "zone_management": false, 00:08:26.617 "zone_append": false, 00:08:26.617 "compare": false, 00:08:26.617 "compare_and_write": false, 00:08:26.617 "abort": false, 00:08:26.617 "seek_hole": true, 00:08:26.617 "seek_data": true, 00:08:26.617 "copy": false, 00:08:26.617 "nvme_iov_md": false 00:08:26.617 }, 00:08:26.617 "driver_specific": { 00:08:26.617 "lvol": { 00:08:26.617 "lvol_store_uuid": "4992378c-2d59-4480-885f-a9a77d208c48", 00:08:26.617 "base_bdev": "aio_bdev", 00:08:26.617 "thin_provision": false, 00:08:26.617 "num_allocated_clusters": 38, 00:08:26.617 "snapshot": false, 00:08:26.617 "clone": false, 00:08:26.617 "esnap_clone": false 00:08:26.617 } 00:08:26.617 } 00:08:26.617 } 00:08:26.617 ] 00:08:26.617 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:26.617 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:26.617 19:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:26.876 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:26.876 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:26.876 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.444 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.444 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 00fc9247-9538-490e-afd2-72f6fc6374a9 00:08:27.703 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4992378c-2d59-4480-885f-a9a77d208c48 00:08:27.962 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.222 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.481 ************************************ 00:08:28.481 END TEST lvs_grow_dirty 00:08:28.481 ************************************ 00:08:28.481 00:08:28.481 real 0m21.159s 00:08:28.481 user 0m43.584s 00:08:28.481 sys 0m8.778s 00:08:28.481 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.481 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.740 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:28.741 nvmf_trace.0 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.741 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:29.003 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.003 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:29.003 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.003 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.003 rmmod nvme_tcp 00:08:29.003 rmmod nvme_fabrics 00:08:29.263 rmmod nvme_keyring 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63537 ']' 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63537 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63537 ']' 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63537 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:29.263 19:16:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63537 00:08:29.263 killing process with pid 63537 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63537' 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63537 00:08:29.263 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63537 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:29.522 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.523 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.782 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:29.782 ************************************ 00:08:29.782 END TEST nvmf_lvs_grow 00:08:29.782 ************************************ 00:08:29.782 00:08:29.782 real 0m42.662s 00:08:29.782 user 1m7.714s 00:08:29.782 sys 0m12.485s 00:08:29.782 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.782 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.782 ************************************ 00:08:29.782 START TEST nvmf_bdev_io_wait 00:08:29.782 ************************************ 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.782 * Looking for test storage... 
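The nvmftestfini teardown traced above always runs in the same order: stop the target process, strip only the firewall rules the framework tagged, then dismantle the veth/bridge topology and the target namespace. A condensed sketch of that cleanup, using the names seen in this run (nvmf_br, nvmf_init_if*, nvmf_tgt_ns_spdk) and assuming the target pid is in $nvmfpid; the final netns deletion is assumed to be what _remove_spdk_ns does, since the trace redirects its output away.

# stop the nvmf target and wait for it to exit
kill "$nvmfpid" && wait "$nvmfpid"

# drop only the rules tagged with the SPDK_NVMF comment, leave everything else alone
iptables-save | grep -v SPDK_NVMF | iptables-restore

# detach the bridge-side veth ends, then delete bridge and links
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed final step of remove_spdk_ns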
00:08:29.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.782 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.783 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.783 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.783 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.783 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:29.783 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:29.783 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.783 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.042 --rc genhtml_branch_coverage=1 00:08:30.042 --rc genhtml_function_coverage=1 00:08:30.042 --rc genhtml_legend=1 00:08:30.042 --rc geninfo_all_blocks=1 00:08:30.042 --rc geninfo_unexecuted_blocks=1 00:08:30.042 00:08:30.042 ' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.042 --rc genhtml_branch_coverage=1 00:08:30.042 --rc genhtml_function_coverage=1 00:08:30.042 --rc genhtml_legend=1 00:08:30.042 --rc geninfo_all_blocks=1 00:08:30.042 --rc geninfo_unexecuted_blocks=1 00:08:30.042 00:08:30.042 ' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.042 --rc genhtml_branch_coverage=1 00:08:30.042 --rc genhtml_function_coverage=1 00:08:30.042 --rc genhtml_legend=1 00:08:30.042 --rc geninfo_all_blocks=1 00:08:30.042 --rc geninfo_unexecuted_blocks=1 00:08:30.042 00:08:30.042 ' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.042 --rc genhtml_branch_coverage=1 00:08:30.042 --rc genhtml_function_coverage=1 00:08:30.042 --rc genhtml_legend=1 00:08:30.042 --rc geninfo_all_blocks=1 00:08:30.042 --rc geninfo_unexecuted_blocks=1 00:08:30.042 00:08:30.042 ' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.042 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
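The lcov gate traced above (lt 1.15 2 via cmp_versions) compares versions field by field after splitting on dots and dashes, rather than doing a plain string compare. A minimal standalone sketch of the same idea, not the scripts/common.sh function itself:

# return 0 (true) if $1 is an older version than $2, comparing dot/dash fields numerically
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((10#$x < 10#$y)) && return 0
        ((10#$x > 10#$y)) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates the newer branch/function coverage flags"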
00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.042 
19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:30.042 Cannot find device "nvmf_init_br" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:30.042 Cannot find device "nvmf_init_br2" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:30.042 Cannot find device "nvmf_tgt_br" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.042 Cannot find device "nvmf_tgt_br2" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:30.042 Cannot find device "nvmf_init_br" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:30.042 Cannot find device "nvmf_init_br2" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:30.042 Cannot find device "nvmf_tgt_br" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:30.042 Cannot find device "nvmf_tgt_br2" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:30.042 Cannot find device "nvmf_br" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:30.042 Cannot find device "nvmf_init_if" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:30.042 Cannot find device "nvmf_init_if2" 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:30.042 
19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:30.042 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.043 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:30.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:30.301 00:08:30.301 --- 10.0.0.3 ping statistics --- 00:08:30.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.301 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:30.301 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:30.301 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:30.301 00:08:30.301 --- 10.0.0.4 ping statistics --- 00:08:30.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.301 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:30.301 00:08:30.301 --- 10.0.0.1 ping statistics --- 00:08:30.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.301 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:30.301 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:30.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:30.302 00:08:30.302 --- 10.0.0.2 ping statistics --- 00:08:30.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.302 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63912 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63912 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63912 ']' 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.302 19:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.302 [2024-11-26 19:16:28.737154] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
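The nvmf_veth_init sequence that produced the ping checks above builds a small bridged topology: each side gets a veth pair, the *_br ends are enslaved to nvmf_br, the initiator addresses (10.0.0.1/2) stay in the default namespace and the target addresses (10.0.0.3/4) live inside nvmf_tgt_ns_spdk. A trimmed sketch of that bring-up for a single initiator/target pair, using the same names and addresses as this run:

ip netns add nvmf_tgt_ns_spdk

# one veth pair per side; the *_br ends will be enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the *_br ends so the two namespaces can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# open the NVMe/TCP port, tagged so the teardown can find and remove the rule
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.3   # initiator -> target sanity check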
00:08:30.302 [2024-11-26 19:16:28.737503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.560 [2024-11-26 19:16:28.892841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.560 [2024-11-26 19:16:28.965046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.560 [2024-11-26 19:16:28.965274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.560 [2024-11-26 19:16:28.965296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.560 [2024-11-26 19:16:28.965307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.560 [2024-11-26 19:16:28.965316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.560 [2024-11-26 19:16:28.966551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.560 [2024-11-26 19:16:28.967038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.560 [2024-11-26 19:16:28.967185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.560 [2024-11-26 19:16:28.967193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 [2024-11-26 19:16:29.117481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 [2024-11-26 19:16:29.134335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 Malloc0 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 [2024-11-26 19:16:29.192165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63945 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63947 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.819 19:16:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.819 { 00:08:30.819 "params": { 00:08:30.819 "name": "Nvme$subsystem", 00:08:30.819 "trtype": "$TEST_TRANSPORT", 00:08:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.819 "adrfam": "ipv4", 00:08:30.819 "trsvcid": "$NVMF_PORT", 00:08:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.819 "hdgst": ${hdgst:-false}, 00:08:30.819 "ddgst": ${ddgst:-false} 00:08:30.819 }, 00:08:30.819 "method": "bdev_nvme_attach_controller" 00:08:30.819 } 00:08:30.819 EOF 00:08:30.819 )") 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63949 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.819 { 00:08:30.819 "params": { 00:08:30.819 "name": "Nvme$subsystem", 00:08:30.819 "trtype": "$TEST_TRANSPORT", 00:08:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.819 "adrfam": "ipv4", 00:08:30.819 "trsvcid": "$NVMF_PORT", 00:08:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.819 "hdgst": ${hdgst:-false}, 00:08:30.819 "ddgst": ${ddgst:-false} 00:08:30.819 }, 00:08:30.819 "method": "bdev_nvme_attach_controller" 00:08:30.819 } 00:08:30.819 EOF 00:08:30.819 )") 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63952 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.819 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:08:30.819 { 00:08:30.819 "params": { 00:08:30.819 "name": "Nvme$subsystem", 00:08:30.819 "trtype": "$TEST_TRANSPORT", 00:08:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.819 "adrfam": "ipv4", 00:08:30.819 "trsvcid": "$NVMF_PORT", 00:08:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.820 "hdgst": ${hdgst:-false}, 00:08:30.820 "ddgst": ${ddgst:-false} 00:08:30.820 }, 00:08:30.820 "method": "bdev_nvme_attach_controller" 00:08:30.820 } 00:08:30.820 EOF 00:08:30.820 )") 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.820 { 00:08:30.820 "params": { 00:08:30.820 "name": "Nvme$subsystem", 00:08:30.820 "trtype": "$TEST_TRANSPORT", 00:08:30.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.820 "adrfam": "ipv4", 00:08:30.820 "trsvcid": "$NVMF_PORT", 00:08:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.820 "hdgst": ${hdgst:-false}, 00:08:30.820 "ddgst": ${ddgst:-false} 00:08:30.820 }, 00:08:30.820 "method": "bdev_nvme_attach_controller" 00:08:30.820 } 00:08:30.820 EOF 00:08:30.820 )") 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.820 "params": { 00:08:30.820 "name": "Nvme1", 00:08:30.820 "trtype": "tcp", 00:08:30.820 "traddr": "10.0.0.3", 00:08:30.820 "adrfam": "ipv4", 00:08:30.820 "trsvcid": "4420", 00:08:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.820 "hdgst": false, 00:08:30.820 "ddgst": false 00:08:30.820 }, 00:08:30.820 "method": "bdev_nvme_attach_controller" 00:08:30.820 }' 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.820 "params": { 00:08:30.820 "name": "Nvme1", 00:08:30.820 "trtype": "tcp", 00:08:30.820 "traddr": "10.0.0.3", 00:08:30.820 "adrfam": "ipv4", 00:08:30.820 "trsvcid": "4420", 00:08:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.820 "hdgst": false, 00:08:30.820 "ddgst": false 00:08:30.820 }, 00:08:30.820 "method": "bdev_nvme_attach_controller" 00:08:30.820 }' 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
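Stripped of the xtrace noise, the target-side bring-up recorded above is a short RPC sequence. Roughly the same steps issued directly with scripts/rpc.py, using the paths and arguments from this run; framework_start_init is required because the target was launched with --wait-for-rpc, and the tiny bdev_set_options values are what force I/O submissions onto the io_wait path this test exercises.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# pre-init options must go in before framework_start_init
$rpc bdev_set_options -p 5 -c 1            # deliberately tiny bdev_io pool/cache
$rpc framework_start_init

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420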
00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.820 "params": { 00:08:30.820 "name": "Nvme1", 00:08:30.820 "trtype": "tcp", 00:08:30.820 "traddr": "10.0.0.3", 00:08:30.820 "adrfam": "ipv4", 00:08:30.820 "trsvcid": "4420", 00:08:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.820 "hdgst": false, 00:08:30.820 "ddgst": false 00:08:30.820 }, 00:08:30.820 "method": "bdev_nvme_attach_controller" 00:08:30.820 }' 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.820 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.820 "params": { 00:08:30.820 "name": "Nvme1", 00:08:30.820 "trtype": "tcp", 00:08:30.820 "traddr": "10.0.0.3", 00:08:30.820 "adrfam": "ipv4", 00:08:30.820 "trsvcid": "4420", 00:08:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.820 "hdgst": false, 00:08:30.820 "ddgst": false 00:08:30.820 }, 00:08:30.820 "method": "bdev_nvme_attach_controller" 00:08:30.820 }' 00:08:31.079 [2024-11-26 19:16:29.260860] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:08:31.079 [2024-11-26 19:16:29.261485] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:31.079 [2024-11-26 19:16:29.263957] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:08:31.079 [2024-11-26 19:16:29.264214] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:31.079 19:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63945 00:08:31.079 [2024-11-26 19:16:29.287464] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:08:31.079 [2024-11-26 19:16:29.287691] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:31.079 [2024-11-26 19:16:29.289149] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:08:31.079 [2024-11-26 19:16:29.289224] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:31.079 [2024-11-26 19:16:29.494924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.337 [2024-11-26 19:16:29.549171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:31.337 [2024-11-26 19:16:29.563203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.337 [2024-11-26 19:16:29.578482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.337 [2024-11-26 19:16:29.632112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:31.337 [2024-11-26 19:16:29.643546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.337 [2024-11-26 19:16:29.645977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.337 [2024-11-26 19:16:29.698621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:31.337 [2024-11-26 19:16:29.713108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.337 Running I/O for 1 seconds... 00:08:31.337 [2024-11-26 19:16:29.739213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.337 Running I/O for 1 seconds... 00:08:31.595 [2024-11-26 19:16:29.791877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:31.595 [2024-11-26 19:16:29.805751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.595 Running I/O for 1 seconds... 00:08:31.595 Running I/O for 1 seconds... 
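The four "Running I/O for 1 seconds..." markers above come from four bdevperf processes started in parallel, one per workload and core mask (write/0x10, read/0x20, flush/0x40, unmap/0x80), each fed the generated JSON through a /dev/fd descriptor. A hedged condensation of those launches, reusing the illustrative gen_nvmf_target_json_sketch helper from the earlier note in place of the test's real gen_nvmf_target_json:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
i=1
for spec in "write 0x10" "read 0x20" "flush 0x40" "unmap 0x80"; do
  set -- $spec                                 # $1 = workload, $2 = core mask
  "$BDEVPERF" -m "$2" -i "$i" \
    --json <(gen_nvmf_target_json_sketch 1) \
    -q 128 -o 4096 -w "$1" -t 1 -s 256 &       # 128-deep queue, 4 KiB I/O, 1 s run
  # NOTE: the real test feeds the full gen_nvmf_target_json payload here; the
  # sketch helper only emits the attach-controller fragments for illustration.
  i=$((i + 1))
done
wait   # same role as the per-PID wait calls (FLUSH_PID, UNMAP_PID, ...) in the trace

The per-workload IOPS/latency tables that follow are the summaries each of those processes prints once its one-second run completes.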
00:08:32.527 148112.00 IOPS, 578.56 MiB/s 00:08:32.527 Latency(us) 00:08:32.527 [2024-11-26T19:16:30.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.527 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:32.527 Nvme1n1 : 1.00 147786.89 577.29 0.00 0.00 861.31 407.74 2189.50 00:08:32.527 [2024-11-26T19:16:30.967Z] =================================================================================================================== 00:08:32.527 [2024-11-26T19:16:30.967Z] Total : 147786.89 577.29 0.00 0.00 861.31 407.74 2189.50 00:08:32.527 10607.00 IOPS, 41.43 MiB/s 00:08:32.527 Latency(us) 00:08:32.527 [2024-11-26T19:16:30.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.527 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:32.527 Nvme1n1 : 1.01 10669.38 41.68 0.00 0.00 11951.69 6940.86 19779.96 00:08:32.527 [2024-11-26T19:16:30.967Z] =================================================================================================================== 00:08:32.527 [2024-11-26T19:16:30.967Z] Total : 10669.38 41.68 0.00 0.00 11951.69 6940.86 19779.96 00:08:32.527 7108.00 IOPS, 27.77 MiB/s 00:08:32.527 Latency(us) 00:08:32.527 [2024-11-26T19:16:30.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.527 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:32.527 Nvme1n1 : 1.01 7151.02 27.93 0.00 0.00 17785.81 10307.03 25976.09 00:08:32.527 [2024-11-26T19:16:30.967Z] =================================================================================================================== 00:08:32.527 [2024-11-26T19:16:30.967Z] Total : 7151.02 27.93 0.00 0.00 17785.81 10307.03 25976.09 00:08:32.527 7556.00 IOPS, 29.52 MiB/s 00:08:32.527 Latency(us) 00:08:32.527 [2024-11-26T19:16:30.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.527 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:32.527 Nvme1n1 : 1.01 7620.73 29.77 0.00 0.00 16711.58 5093.93 26333.56 00:08:32.527 [2024-11-26T19:16:30.967Z] =================================================================================================================== 00:08:32.527 [2024-11-26T19:16:30.967Z] Total : 7620.73 29.77 0.00 0.00 16711.58 5093.93 26333.56 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63947 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63949 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63952 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.785 rmmod nvme_tcp 00:08:32.785 rmmod nvme_fabrics 00:08:32.785 rmmod nvme_keyring 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63912 ']' 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63912 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63912 ']' 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63912 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.785 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63912 00:08:33.043 killing process with pid 63912 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63912' 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63912 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63912 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:33.043 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.301 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:33.302 00:08:33.302 real 0m3.605s 00:08:33.302 user 0m13.976s 00:08:33.302 sys 0m2.472s 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.302 ************************************ 00:08:33.302 END TEST nvmf_bdev_io_wait 00:08:33.302 ************************************ 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.302 ************************************ 00:08:33.302 START TEST nvmf_queue_depth 00:08:33.302 ************************************ 00:08:33.302 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:33.562 * Looking for test storage... 
00:08:33.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.562 --rc genhtml_branch_coverage=1 00:08:33.562 --rc genhtml_function_coverage=1 00:08:33.562 --rc genhtml_legend=1 00:08:33.562 --rc geninfo_all_blocks=1 00:08:33.562 --rc geninfo_unexecuted_blocks=1 00:08:33.562 00:08:33.562 ' 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.562 --rc genhtml_branch_coverage=1 00:08:33.562 --rc genhtml_function_coverage=1 00:08:33.562 --rc genhtml_legend=1 00:08:33.562 --rc geninfo_all_blocks=1 00:08:33.562 --rc geninfo_unexecuted_blocks=1 00:08:33.562 00:08:33.562 ' 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.562 --rc genhtml_branch_coverage=1 00:08:33.562 --rc genhtml_function_coverage=1 00:08:33.562 --rc genhtml_legend=1 00:08:33.562 --rc geninfo_all_blocks=1 00:08:33.562 --rc geninfo_unexecuted_blocks=1 00:08:33.562 00:08:33.562 ' 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.562 --rc genhtml_branch_coverage=1 00:08:33.562 --rc genhtml_function_coverage=1 00:08:33.562 --rc genhtml_legend=1 00:08:33.562 --rc geninfo_all_blocks=1 00:08:33.562 --rc geninfo_unexecuted_blocks=1 00:08:33.562 00:08:33.562 ' 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.562 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.563 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:33.563 
19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.563 19:16:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:33.563 Cannot find device "nvmf_init_br" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:33.563 Cannot find device "nvmf_init_br2" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:33.563 Cannot find device "nvmf_tgt_br" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.563 Cannot find device "nvmf_tgt_br2" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:33.563 Cannot find device "nvmf_init_br" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:33.563 Cannot find device "nvmf_init_br2" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:33.563 Cannot find device "nvmf_tgt_br" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.563 Cannot find device "nvmf_tgt_br2" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.563 Cannot find device "nvmf_br" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.563 Cannot find device "nvmf_init_if" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.563 Cannot find device "nvmf_init_if2" 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:33.563 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.564 19:16:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:33.564 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.564 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:33.564 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.564 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.564 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.822 
19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:08:33.822 00:08:33.822 --- 10.0.0.3 ping statistics --- 00:08:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.822 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.822 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.822 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:08:33.822 00:08:33.822 --- 10.0.0.4 ping statistics --- 00:08:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.822 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:33.822 00:08:33.822 --- 10.0.0.1 ping statistics --- 00:08:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.822 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:33.822 00:08:33.822 --- 10.0.0.2 ping statistics --- 00:08:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.822 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.822 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64214 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64214 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64214 ']' 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.823 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.080 [2024-11-26 19:16:32.294466] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:08:34.080 [2024-11-26 19:16:32.294725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.080 [2024-11-26 19:16:32.439822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.080 [2024-11-26 19:16:32.512688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.080 [2024-11-26 19:16:32.512758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.080 [2024-11-26 19:16:32.512773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.080 [2024-11-26 19:16:32.512785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.080 [2024-11-26 19:16:32.512795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.080 [2024-11-26 19:16:32.513277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.339 [2024-11-26 19:16:32.569716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 [2024-11-26 19:16:32.670188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 Malloc0 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 [2024-11-26 19:16:32.717411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:34.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64233 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64233 /var/tmp/bdevperf.sock 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64233 ']' 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.339 19:16:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 [2024-11-26 19:16:32.767328] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:08:34.339 [2024-11-26 19:16:32.767419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64233 ] 00:08:34.598 [2024-11-26 19:16:32.930861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.598 [2024-11-26 19:16:33.005046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.856 [2024-11-26 19:16:33.067069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.423 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.423 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:35.423 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:35.423 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.423 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.682 NVMe0n1 00:08:35.682 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.682 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.682 Running I/O for 10 seconds... 00:08:37.553 6144.00 IOPS, 24.00 MiB/s [2024-11-26T19:16:37.370Z] 6650.00 IOPS, 25.98 MiB/s [2024-11-26T19:16:38.307Z] 6850.67 IOPS, 26.76 MiB/s [2024-11-26T19:16:39.243Z] 6920.50 IOPS, 27.03 MiB/s [2024-11-26T19:16:40.179Z] 6851.80 IOPS, 26.76 MiB/s [2024-11-26T19:16:41.114Z] 6946.67 IOPS, 27.14 MiB/s [2024-11-26T19:16:42.051Z] 7043.43 IOPS, 27.51 MiB/s [2024-11-26T19:16:43.430Z] 7141.88 IOPS, 27.90 MiB/s [2024-11-26T19:16:43.998Z] 7182.33 IOPS, 28.06 MiB/s [2024-11-26T19:16:44.258Z] 7186.40 IOPS, 28.07 MiB/s 00:08:45.818 Latency(us) 00:08:45.818 [2024-11-26T19:16:44.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.818 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:45.818 Verification LBA range: start 0x0 length 0x4000 00:08:45.818 NVMe0n1 : 10.08 7226.30 28.23 0.00 0.00 141027.68 16086.11 102474.47 00:08:45.818 [2024-11-26T19:16:44.258Z] =================================================================================================================== 00:08:45.818 [2024-11-26T19:16:44.258Z] Total : 7226.30 28.23 0.00 0.00 141027.68 16086.11 102474.47 00:08:45.818 { 00:08:45.818 "results": [ 00:08:45.818 { 00:08:45.818 "job": "NVMe0n1", 00:08:45.818 "core_mask": "0x1", 00:08:45.818 "workload": "verify", 00:08:45.818 "status": "finished", 00:08:45.818 "verify_range": { 00:08:45.818 "start": 0, 00:08:45.818 "length": 16384 00:08:45.818 }, 00:08:45.818 "queue_depth": 1024, 00:08:45.818 "io_size": 4096, 00:08:45.818 "runtime": 10.082056, 00:08:45.818 "iops": 7226.303841200644, 00:08:45.818 "mibps": 28.227749379690014, 00:08:45.818 "io_failed": 0, 00:08:45.818 "io_timeout": 0, 00:08:45.818 "avg_latency_us": 141027.68296595025, 00:08:45.818 "min_latency_us": 16086.10909090909, 00:08:45.818 "max_latency_us": 102474.47272727273 
00:08:45.818 } 00:08:45.818 ], 00:08:45.818 "core_count": 1 00:08:45.818 } 00:08:45.818 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64233 00:08:45.818 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64233 ']' 00:08:45.818 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64233 00:08:45.818 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:45.818 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.818 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64233 00:08:45.818 killing process with pid 64233 00:08:45.818 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.818 00:08:45.819 Latency(us) 00:08:45.819 [2024-11-26T19:16:44.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.819 [2024-11-26T19:16:44.259Z] =================================================================================================================== 00:08:45.819 [2024-11-26T19:16:44.259Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.819 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.819 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.819 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64233' 00:08:45.819 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64233 00:08:45.819 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64233 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.078 rmmod nvme_tcp 00:08:46.078 rmmod nvme_fabrics 00:08:46.078 rmmod nvme_keyring 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64214 ']' 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64214 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64214 ']' 
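For context on the nvmf_queue_depth run summarized in the JSON above: the target and the bdevperf initiator are wired together through the short series of JSON-RPC calls visible in the trace (queue_depth.sh lines 23 through 35). Below is a condensed sketch of that sequence with rpc.py standing in for the test's rpc_cmd wrapper (an assumed equivalence; the rpc.py path and default target socket are assumptions, while all arguments are copied from the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path; target side uses the default /var/tmp/spdk.sock
# Target side (nvmf_tgt, core mask 0x2): TCP transport, a 64 MiB / 512 B malloc
# bdev, and one subsystem listening on 10.0.0.3:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Initiator side (bdevperf started with -q 1024 -w verify): attach the remote
# namespace over the bdevperf RPC socket, then kick off the 10-second run.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bdevperf.sock perform_tests

With bdevperf keeping 1024 I/Os in flight against the single Malloc-backed namespace, the run reports the roughly 7.2K IOPS shown above before the nvmftestfini teardown that follows.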
00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64214 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64214 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64214' 00:08:46.078 killing process with pid 64214 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64214 00:08:46.078 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64214 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:46.338 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:46.597 19:16:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:46.597 00:08:46.597 real 0m13.309s 00:08:46.597 user 0m23.078s 00:08:46.597 sys 0m2.203s 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.597 19:16:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.597 ************************************ 00:08:46.597 END TEST nvmf_queue_depth 00:08:46.597 ************************************ 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.856 ************************************ 00:08:46.856 START TEST nvmf_target_multipath 00:08:46.856 ************************************ 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:46.856 * Looking for test storage... 
00:08:46.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.856 --rc genhtml_branch_coverage=1 00:08:46.856 --rc genhtml_function_coverage=1 00:08:46.856 --rc genhtml_legend=1 00:08:46.856 --rc geninfo_all_blocks=1 00:08:46.856 --rc geninfo_unexecuted_blocks=1 00:08:46.856 00:08:46.856 ' 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.856 --rc genhtml_branch_coverage=1 00:08:46.856 --rc genhtml_function_coverage=1 00:08:46.856 --rc genhtml_legend=1 00:08:46.856 --rc geninfo_all_blocks=1 00:08:46.856 --rc geninfo_unexecuted_blocks=1 00:08:46.856 00:08:46.856 ' 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.856 --rc genhtml_branch_coverage=1 00:08:46.856 --rc genhtml_function_coverage=1 00:08:46.856 --rc genhtml_legend=1 00:08:46.856 --rc geninfo_all_blocks=1 00:08:46.856 --rc geninfo_unexecuted_blocks=1 00:08:46.856 00:08:46.856 ' 00:08:46.856 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.856 --rc genhtml_branch_coverage=1 00:08:46.856 --rc genhtml_function_coverage=1 00:08:46.856 --rc genhtml_legend=1 00:08:46.856 --rc geninfo_all_blocks=1 00:08:46.857 --rc geninfo_unexecuted_blocks=1 00:08:46.857 00:08:46.857 ' 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.857 
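The common.sh being sourced here fixes the initiator identity for the whole run: nvme gen-hostnqn produces a uuid-based host NQN, the same UUID is reused as the host ID, and both are packed into the NVME_HOST array that the later nvme connect calls expand. A minimal sketch of that derivation; only nvme gen-hostnqn, the array assignment, and the connect flags are taken from the trace, and the parameter expansion is an illustrative stand-in for whatever common.sh actually does:

# Derive a stable host identity for this test run (sketch)
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:560f6fb4-...
NVME_HOSTID=${NVME_HOSTNQN##*:}         # illustrative: keep only the trailing UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

# Reused further down when logging in to each listener, e.g.:
#   nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420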
19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.857 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:46.857 19:16:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:46.857 Cannot find device "nvmf_init_br" 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:46.857 Cannot find device "nvmf_init_br2" 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:46.857 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:47.115 Cannot find device "nvmf_tgt_br" 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.115 Cannot find device "nvmf_tgt_br2" 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:47.115 Cannot find device "nvmf_init_br" 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:47.115 Cannot find device "nvmf_init_br2" 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:47.115 Cannot find device "nvmf_tgt_br" 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:47.115 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:47.115 Cannot find device "nvmf_tgt_br2" 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:47.116 Cannot find device "nvmf_br" 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:47.116 Cannot find device "nvmf_init_if" 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:47.116 Cannot find device "nvmf_init_if2" 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
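With NET_TYPE=virt the test builds its own fabric instead of touching real NICs: a network namespace for the nvmf target, two veth pairs (one per listener address), /24 addresses on each end, and, right after this point, a bridge that ties the host-side peers together. The "Cannot find device" lines above are only the idempotent cleanup of a previous run. Condensed into a sketch, using the interface names and addresses from the trace (the second pair is handled identically):

# Sketch of the virtual fabric assembled above
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# nvmf_init_if2 / nvmf_tgt_if2 get 10.0.0.2 and 10.0.0.4 the same way; all four
# *_br peers are then enslaved to the nvmf_br bridge, and iptables ACCEPT rules
# (tagged with an SPDK_NVMF comment so teardown can strip them) open port 4420.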
00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:47.116 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:47.374 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:47.374 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:47.374 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:47.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:47.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:47.375 00:08:47.375 --- 10.0.0.3 ping statistics --- 00:08:47.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.375 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:47.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:47.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:08:47.375 00:08:47.375 --- 10.0.0.4 ping statistics --- 00:08:47.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.375 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:47.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:47.375 00:08:47.375 --- 10.0.0.1 ping statistics --- 00:08:47.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.375 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:47.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:47.375 00:08:47.375 --- 10.0.0.2 ping statistics --- 00:08:47.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.375 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64617 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64617 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64617 ']' 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:47.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.375 19:16:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.375 [2024-11-26 19:16:45.732473] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:08:47.375 [2024-11-26 19:16:45.732608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.634 [2024-11-26 19:16:45.886320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.634 [2024-11-26 19:16:45.955816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.634 [2024-11-26 19:16:45.955912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.634 [2024-11-26 19:16:45.955929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.634 [2024-11-26 19:16:45.955940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.634 [2024-11-26 19:16:45.955949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.634 [2024-11-26 19:16:45.957228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.634 [2024-11-26 19:16:45.957271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.634 [2024-11-26 19:16:45.957415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.634 [2024-11-26 19:16:45.957421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.634 [2024-11-26 19:16:46.016767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.891 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.891 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:47.891 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.891 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.891 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.891 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.891 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:48.149 [2024-11-26 19:16:46.409235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.149 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:48.406 Malloc0 00:08:48.406 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:48.664 19:16:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.922 19:16:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:49.180 [2024-11-26 19:16:47.586022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:49.180 19:16:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:49.438 [2024-11-26 19:16:47.866311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:49.696 19:16:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:49.696 19:16:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:49.954 19:16:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.954 19:16:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:49.954 19:16:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.954 19:16:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:49.954 19:16:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:51.937 19:16:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64699 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:51.937 19:16:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:51.937 [global] 00:08:51.937 thread=1 00:08:51.937 invalidate=1 00:08:51.937 rw=randrw 00:08:51.937 time_based=1 00:08:51.937 runtime=6 00:08:51.937 ioengine=libaio 00:08:51.937 direct=1 00:08:51.937 bs=4096 00:08:51.937 iodepth=128 00:08:51.937 norandommap=0 00:08:51.937 numjobs=1 00:08:51.937 00:08:51.937 verify_dump=1 00:08:51.937 verify_backlog=512 00:08:51.937 verify_state_save=0 00:08:51.937 do_verify=1 00:08:51.937 verify=crc32c-intel 00:08:51.937 [job0] 00:08:51.937 filename=/dev/nvme0n1 00:08:51.937 Could not set queue depth (nvme0n1) 00:08:52.196 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.196 fio-3.35 00:08:52.196 Starting 1 thread 00:08:53.132 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:53.132 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:53.390 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:53.649 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:54.215 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64699 00:08:58.401 00:08:58.402 job0: (groupid=0, jobs=1): err= 0: pid=64725: Tue Nov 26 19:16:56 2024 00:08:58.402 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(235MiB/6007msec) 00:08:58.402 slat (usec): min=7, max=8272, avg=58.90, stdev=229.97 00:08:58.402 clat (usec): min=1797, max=17372, avg=8684.07, stdev=1435.33 00:08:58.402 lat (usec): min=1812, max=17402, avg=8742.96, stdev=1437.83 00:08:58.402 clat percentiles (usec): 00:08:58.402 | 1.00th=[ 4686], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 7963], 00:08:58.402 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:08:58.402 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[11994], 00:08:58.402 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14353], 99.95th=[14746], 00:08:58.402 | 99.99th=[15270] 00:08:58.402 bw ( KiB/s): min= 7920, max=25208, per=51.07%, avg=20491.64, stdev=6463.79, samples=11 00:08:58.402 iops : min= 1980, max= 6302, avg=5122.91, stdev=1615.95, samples=11 00:08:58.402 write: IOPS=6039, BW=23.6MiB/s (24.7MB/s)(122MiB/5192msec); 0 zone resets 00:08:58.402 slat (usec): min=17, max=2401, avg=67.87, stdev=164.44 00:08:58.402 clat (usec): min=1988, max=15110, avg=7611.30, stdev=1279.95 00:08:58.402 lat (usec): min=2016, max=15146, avg=7679.17, stdev=1284.30 00:08:58.402 clat percentiles (usec): 00:08:58.402 | 1.00th=[ 3621], 5.00th=[ 4621], 10.00th=[ 6325], 20.00th=[ 7111], 00:08:58.402 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:08:58.402 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8979], 00:08:58.402 | 99.00th=[11600], 99.50th=[12256], 99.90th=[13960], 99.95th=[14222], 00:08:58.402 | 99.99th=[14484] 00:08:58.402 bw ( KiB/s): min= 8168, max=24880, per=85.23%, avg=20589.82, stdev=6287.20, samples=11 00:08:58.402 iops : min= 2042, max= 6220, avg=5147.45, stdev=1571.80, samples=11 00:08:58.402 lat (msec) : 2=0.01%, 4=1.20%, 10=92.38%, 20=6.41% 00:08:58.402 cpu : usr=5.71%, sys=21.06%, ctx=5354, majf=0, minf=108 00:08:58.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:58.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:58.402 issued rwts: total=60252,31355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:58.402 00:08:58.402 Run status group 0 (all jobs): 00:08:58.402 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=235MiB (247MB), run=6007-6007msec 00:08:58.402 WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=122MiB (128MB), run=5192-5192msec 00:08:58.402 00:08:58.402 Disk stats (read/write): 00:08:58.402 nvme0n1: ios=59602/30548, merge=0/0, ticks=498177/218943, in_queue=717120, util=98.68% 00:08:58.402 19:16:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:58.660 19:16:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64806 00:08:58.919 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:58.919 [global] 00:08:58.919 thread=1 00:08:58.919 invalidate=1 00:08:58.919 rw=randrw 00:08:58.919 time_based=1 00:08:58.919 runtime=6 00:08:58.919 ioengine=libaio 00:08:58.919 direct=1 00:08:58.919 bs=4096 00:08:58.919 iodepth=128 00:08:58.919 norandommap=0 00:08:58.919 numjobs=1 00:08:58.919 00:08:58.919 verify_dump=1 00:08:58.919 verify_backlog=512 00:08:58.919 verify_state_save=0 00:08:58.919 do_verify=1 00:08:58.919 verify=crc32c-intel 00:08:58.919 [job0] 00:08:58.919 filename=/dev/nvme0n1 00:08:58.919 Could not set queue depth (nvme0n1) 00:08:58.919 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.919 fio-3.35 00:08:58.919 Starting 1 thread 00:08:59.859 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:00.117 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:00.376 
19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:00.376 19:16:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:00.635 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:00.894 19:16:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64806 00:09:05.086 00:09:05.086 job0: (groupid=0, jobs=1): err= 0: pid=64827: Tue Nov 26 19:17:03 2024 00:09:05.086 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(260MiB/6005msec) 00:09:05.086 slat (usec): min=6, max=15575, avg=44.85, stdev=204.32 00:09:05.086 clat (usec): min=301, max=23199, avg=7834.65, stdev=2037.21 00:09:05.086 lat (usec): min=313, max=23219, avg=7879.50, stdev=2052.98 00:09:05.086 clat percentiles (usec): 00:09:05.086 | 1.00th=[ 2802], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 6128], 00:09:05.086 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8455], 00:09:05.086 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11731], 00:09:05.086 | 99.00th=[13304], 99.50th=[13960], 99.90th=[16909], 99.95th=[16909], 00:09:05.086 | 99.99th=[16909] 00:09:05.086 bw ( KiB/s): min=11192, max=36800, per=54.21%, avg=24075.73, stdev=6672.46, samples=11 00:09:05.086 iops : min= 2798, max= 9200, avg=6018.91, stdev=1668.13, samples=11 00:09:05.086 write: IOPS=6508, BW=25.4MiB/s (26.7MB/s)(141MiB/5543msec); 0 zone resets 00:09:05.086 slat (usec): min=15, max=1949, avg=55.77, stdev=136.23 00:09:05.086 clat (usec): min=888, max=14612, avg=6653.79, stdev=1862.96 00:09:05.086 lat (usec): min=914, max=14637, avg=6709.56, stdev=1877.17 00:09:05.086 clat percentiles (usec): 00:09:05.086 | 1.00th=[ 2573], 5.00th=[ 3425], 10.00th=[ 3884], 20.00th=[ 4621], 00:09:05.086 | 30.00th=[ 5473], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 7570], 00:09:05.086 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:09:05.086 | 99.00th=[10945], 99.50th=[11863], 99.90th=[13173], 99.95th=[13435], 00:09:05.086 | 99.99th=[14222] 00:09:05.086 bw ( KiB/s): min=11488, max=36232, per=92.36%, avg=24046.27, stdev=6519.37, samples=11 00:09:05.086 iops : min= 2872, max= 9058, avg=6011.45, stdev=1629.91, samples=11 00:09:05.086 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.05% 00:09:05.086 lat (msec) : 2=0.47%, 4=5.28%, 10=88.73%, 20=5.41%, 50=0.01% 00:09:05.086 cpu : usr=5.95%, sys=23.80%, ctx=5757, majf=0, minf=90 00:09:05.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:05.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.086 issued rwts: total=66668,36078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.086 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:05.086 00:09:05.086 Run status group 0 (all jobs): 00:09:05.086 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=260MiB (273MB), run=6005-6005msec 00:09:05.086 WRITE: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=141MiB (148MB), run=5543-5543msec 00:09:05.086 00:09:05.086 Disk stats (read/write): 00:09:05.086 nvme0n1: ios=66061/35272, merge=0/0, ticks=495644/219026, in_queue=714670, util=98.61% 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:05.087 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.656 rmmod nvme_tcp 00:09:05.656 rmmod nvme_fabrics 00:09:05.656 rmmod nvme_keyring 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64617 ']' 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64617 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64617 ']' 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64617 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64617 00:09:05.656 killing process with pid 64617 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64617' 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64617 00:09:05.656 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64617 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:05.915 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:05.916 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:05.916 
19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:05.916 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:05.916 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:05.916 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:06.175 ************************************ 00:09:06.175 END TEST nvmf_target_multipath 00:09:06.175 ************************************ 00:09:06.175 00:09:06.175 real 0m19.381s 00:09:06.175 user 1m11.307s 00:09:06.175 sys 0m10.386s 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.175 ************************************ 00:09:06.175 START TEST nvmf_zcopy 00:09:06.175 ************************************ 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:06.175 * Looking for test storage... 
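Before the zcopy run gets going, it is worth spelling out what the multipath teardown traced above actually does: nvmf_veth_fini (nvmf/common.sh@233-246) detaches every veth bridge port, brings the ports down, deletes the nvmf_br bridge, removes the host-side veth ends, removes the target-side ends inside the namespace, and finally drops the namespace. A condensed sketch of that order, reconstructed from the trace (the closing netns delete is an assumption about what remove_spdk_ns boils down to; the real helper runs all the nomaster calls before the down calls):

    # Condensed from the nvmf_veth_fini trace above.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster          # detach the bridge port
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge        # remove the bridge itself
    ip link delete nvmf_init_if               # initiator-side veth ends
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # target-side ends
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk          # assumed: the net effect of remove_spdk_ns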
00:09:06.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.175 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.436 --rc genhtml_branch_coverage=1 00:09:06.436 --rc genhtml_function_coverage=1 00:09:06.436 --rc genhtml_legend=1 00:09:06.436 --rc geninfo_all_blocks=1 00:09:06.436 --rc geninfo_unexecuted_blocks=1 00:09:06.436 00:09:06.436 ' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.436 --rc genhtml_branch_coverage=1 00:09:06.436 --rc genhtml_function_coverage=1 00:09:06.436 --rc genhtml_legend=1 00:09:06.436 --rc geninfo_all_blocks=1 00:09:06.436 --rc geninfo_unexecuted_blocks=1 00:09:06.436 00:09:06.436 ' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.436 --rc genhtml_branch_coverage=1 00:09:06.436 --rc genhtml_function_coverage=1 00:09:06.436 --rc genhtml_legend=1 00:09:06.436 --rc geninfo_all_blocks=1 00:09:06.436 --rc geninfo_unexecuted_blocks=1 00:09:06.436 00:09:06.436 ' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.436 --rc genhtml_branch_coverage=1 00:09:06.436 --rc genhtml_function_coverage=1 00:09:06.436 --rc genhtml_legend=1 00:09:06.436 --rc geninfo_all_blocks=1 00:09:06.436 --rc geninfo_unexecuted_blocks=1 00:09:06.436 00:09:06.436 ' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
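The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov predates version 2, so that the older --rc lcov_* option spelling is kept in LCOV_OPTS. The helper splits both version strings on '.', '-' and ':' and compares them component by component. A minimal stand-alone sketch of the same idea (simplified; the real cmp_versions also routes each component through its decimal validator and supports other comparison operators):

    # Return success if dotted version $1 is strictly lower than $2 (simplified cmp_versions).
    version_lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a > b )) && return 1       # a higher component settles it immediately
            (( a < b )) && return 0
        done
        return 1                          # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* spelling'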
00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
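The "[: : integer expression expected" message from nvmf/common.sh line 33, visible just above, is a plain shell artifact rather than a test failure: '[' receives an empty string where -eq needs a number, the comparison simply evaluates false, and setup continues. A tiny reproduction of the failure mode and one defensive spelling (the variable name here is illustrative, not the one common.sh actually checks):

    flag=""                                      # empty where an integer is expected
    if [ "$flag" -eq 1 ]; then echo enabled; fi
    # -> [: : integer expression expected   (branch is not taken, the script carries on)

    # Defaulting the value keeps the numeric comparison well-formed:
    if [ "${flag:-0}" -eq 1 ]; then echo enabled; fi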
00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.436 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:06.437 Cannot find device "nvmf_init_br" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:06.437 19:17:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:06.437 Cannot find device "nvmf_init_br2" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:06.437 Cannot find device "nvmf_tgt_br" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.437 Cannot find device "nvmf_tgt_br2" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:06.437 Cannot find device "nvmf_init_br" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:06.437 Cannot find device "nvmf_init_br2" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:06.437 Cannot find device "nvmf_tgt_br" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:06.437 Cannot find device "nvmf_tgt_br2" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:06.437 Cannot find device "nvmf_br" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:06.437 Cannot find device "nvmf_init_if" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:06.437 Cannot find device "nvmf_init_if2" 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.437 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:06.696 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:06.697 19:17:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:06.697 19:17:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:06.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:06.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:06.697 00:09:06.697 --- 10.0.0.3 ping statistics --- 00:09:06.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.697 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:06.697 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:06.697 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:06.697 00:09:06.697 --- 10.0.0.4 ping statistics --- 00:09:06.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.697 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:06.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:06.697 00:09:06.697 --- 10.0.0.1 ping statistics --- 00:09:06.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.697 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:06.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:09:06.697 00:09:06.697 --- 10.0.0.2 ping statistics --- 00:09:06.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.697 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65129 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65129 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65129 ']' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.697 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.957 [2024-11-26 19:17:05.178378] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:09:06.957 [2024-11-26 19:17:05.178455] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.957 [2024-11-26 19:17:05.330671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.957 [2024-11-26 19:17:05.387672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.957 [2024-11-26 19:17:05.387728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.957 [2024-11-26 19:17:05.387743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.957 [2024-11-26 19:17:05.387754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.957 [2024-11-26 19:17:05.387771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.957 [2024-11-26 19:17:05.388243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.217 [2024-11-26 19:17:05.447941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.217 [2024-11-26 19:17:05.565860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.217 [2024-11-26 19:17:05.582039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.217 malloc0 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.217 { 00:09:07.217 "params": { 00:09:07.217 "name": "Nvme$subsystem", 00:09:07.217 "trtype": "$TEST_TRANSPORT", 00:09:07.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.217 "adrfam": "ipv4", 00:09:07.217 "trsvcid": "$NVMF_PORT", 00:09:07.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.217 "hdgst": ${hdgst:-false}, 00:09:07.217 "ddgst": ${ddgst:-false} 00:09:07.217 }, 00:09:07.217 "method": "bdev_nvme_attach_controller" 00:09:07.217 } 00:09:07.217 EOF 00:09:07.217 )") 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
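Pulled out of the rpc_cmd traces above, the target-side setup for the zcopy test reduces to a handful of RPCs: a TCP transport created with zero-copy enabled (-o -c 0 --zcopy), the cnode1 subsystem, its data and discovery listeners on 10.0.0.3:4420, and a 32 MiB malloc bdev attached as namespace 1. Invoked directly with rpc.py against the target's RPC socket, the same sequence would look roughly like this (addresses, NQNs and sizes exactly as in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MiB backing bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf is then pointed at that listener through the JSON emitted by gen_nvmf_target_json and run for 10 seconds with queue depth 128 and 8 KiB verify I/O, which is what produces the per-second IOPS samples below.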
00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:07.217 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.217 "params": { 00:09:07.217 "name": "Nvme1", 00:09:07.217 "trtype": "tcp", 00:09:07.217 "traddr": "10.0.0.3", 00:09:07.217 "adrfam": "ipv4", 00:09:07.217 "trsvcid": "4420", 00:09:07.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.217 "hdgst": false, 00:09:07.217 "ddgst": false 00:09:07.217 }, 00:09:07.217 "method": "bdev_nvme_attach_controller" 00:09:07.217 }' 00:09:07.490 [2024-11-26 19:17:05.688522] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:09:07.490 [2024-11-26 19:17:05.688633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65154 ] 00:09:07.490 [2024-11-26 19:17:05.840235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.490 [2024-11-26 19:17:05.901598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.762 [2024-11-26 19:17:05.971919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.762 Running I/O for 10 seconds... 00:09:10.078 5363.00 IOPS, 41.90 MiB/s [2024-11-26T19:17:09.455Z] 5617.00 IOPS, 43.88 MiB/s [2024-11-26T19:17:10.393Z] 5813.00 IOPS, 45.41 MiB/s [2024-11-26T19:17:11.331Z] 5977.50 IOPS, 46.70 MiB/s [2024-11-26T19:17:12.268Z] 6061.80 IOPS, 47.36 MiB/s [2024-11-26T19:17:13.204Z] 6088.00 IOPS, 47.56 MiB/s [2024-11-26T19:17:14.144Z] 6197.43 IOPS, 48.42 MiB/s [2024-11-26T19:17:15.520Z] 6217.62 IOPS, 48.58 MiB/s [2024-11-26T19:17:16.455Z] 6228.78 IOPS, 48.66 MiB/s [2024-11-26T19:17:16.455Z] 6237.90 IOPS, 48.73 MiB/s 00:09:18.015 Latency(us) 00:09:18.015 [2024-11-26T19:17:16.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.015 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:18.015 Verification LBA range: start 0x0 length 0x1000 00:09:18.015 Nvme1n1 : 10.02 6238.68 48.74 0.00 0.00 20452.94 1876.71 40513.16 00:09:18.015 [2024-11-26T19:17:16.456Z] =================================================================================================================== 00:09:18.016 [2024-11-26T19:17:16.456Z] Total : 6238.68 48.74 0.00 0.00 20452.94 1876.71 40513.16 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65277 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.016 { 00:09:18.016 "params": { 00:09:18.016 "name": "Nvme$subsystem", 00:09:18.016 "trtype": "$TEST_TRANSPORT", 00:09:18.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.016 "adrfam": "ipv4", 00:09:18.016 "trsvcid": "$NVMF_PORT", 00:09:18.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.016 "hdgst": ${hdgst:-false}, 00:09:18.016 "ddgst": ${ddgst:-false} 00:09:18.016 }, 00:09:18.016 "method": "bdev_nvme_attach_controller" 00:09:18.016 } 00:09:18.016 EOF 00:09:18.016 )") 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:18.016 [2024-11-26 19:17:16.321940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.322001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:18.016 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.016 "params": { 00:09:18.016 "name": "Nvme1", 00:09:18.016 "trtype": "tcp", 00:09:18.016 "traddr": "10.0.0.3", 00:09:18.016 "adrfam": "ipv4", 00:09:18.016 "trsvcid": "4420", 00:09:18.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.016 "hdgst": false, 00:09:18.016 "ddgst": false 00:09:18.016 }, 00:09:18.016 "method": "bdev_nvme_attach_controller" 00:09:18.016 }' 00:09:18.016 [2024-11-26 19:17:16.333949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.333982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.345920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.345978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.357917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.357952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.361419] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:09:18.016 [2024-11-26 19:17:16.361671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65277 ] 00:09:18.016 [2024-11-26 19:17:16.369935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.370140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.381952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.382112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.393960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.394112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.405987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.406165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.417929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.418117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.429930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.430115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.016 [2024-11-26 19:17:16.441937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.016 [2024-11-26 19:17:16.442126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.453948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.454169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.465945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.466131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.477946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.478124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.489966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.490146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.501967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.502134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.503576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.275 [2024-11-26 19:17:16.513991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.514237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.525987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.526234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.537978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.538161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.549986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.550185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.561989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.562184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.563753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.275 [2024-11-26 19:17:16.573987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.574168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.586012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.586049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.598009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.598046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.610012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.610048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.622013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.622050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.627634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.275 [2024-11-26 19:17:16.634012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.634042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.646020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.646056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.658004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.658032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.670004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.670031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.682043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:18.275 [2024-11-26 19:17:16.682091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.694056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.694090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.275 [2024-11-26 19:17:16.706058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.275 [2024-11-26 19:17:16.706090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.718075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.718106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.730071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.730101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.742090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.742124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 Running I/O for 5 seconds... 00:09:18.534 [2024-11-26 19:17:16.754115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.754161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.772193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.772401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.787375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.787552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.797294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.797344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.812660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.812694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.829972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.830046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.846962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.847165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.863846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.863945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.879369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.879403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:18.534 [2024-11-26 19:17:16.895042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.895242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.905380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.905414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.920354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.920392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.936071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.936107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.952724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.952756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.534 [2024-11-26 19:17:16.969051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.534 [2024-11-26 19:17:16.969145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:16.986157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:16.986190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.002240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.002321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.019864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.019969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.035189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.035221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.051515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.051551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.067765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.067800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.085575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.085655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.101595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.101658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.119676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 
[2024-11-26 19:17:17.119718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.134232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.134439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.150683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.150731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.166606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.166669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.184447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.184663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.194634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.194685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.210397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.210432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.794 [2024-11-26 19:17:17.225549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.794 [2024-11-26 19:17:17.225583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.241795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.241830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.258691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.258726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.269203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.269237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.283708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.283998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.301180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.301213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.317370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.317438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.334542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.334754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.351386] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.351438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.367353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.367388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.376974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.377036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.392748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.392782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.402641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.402691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.419043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.419076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.434762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.434797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.452854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.452890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.467664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.467701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.053 [2024-11-26 19:17:17.483221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.053 [2024-11-26 19:17:17.483270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.500818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.500854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.518248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.518322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.533080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.533112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.549380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.549417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.565860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.565908] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.582318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.582353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.597947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.597990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.607349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.607383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.622711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.622759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.639812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.639846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.655170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.655205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.665890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.665955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.681014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.681057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.312 [2024-11-26 19:17:17.695586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.312 [2024-11-26 19:17:17.695638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.313 [2024-11-26 19:17:17.711530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.313 [2024-11-26 19:17:17.711581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.313 [2024-11-26 19:17:17.729399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.313 [2024-11-26 19:17:17.729434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.313 [2024-11-26 19:17:17.744478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.313 [2024-11-26 19:17:17.744656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 11472.00 IOPS, 89.62 MiB/s [2024-11-26T19:17:18.011Z] [2024-11-26 19:17:17.760137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.760174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.769245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.769281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 
19:17:17.785788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.785826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.803099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.803314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.819568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.819619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.835415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.835451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.853091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.853126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.868845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.868880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.886522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.886752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.901695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.901875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.918355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.918392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.934876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.934958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.951432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.951468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.968772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.968982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:17.985249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:17.985323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.571 [2024-11-26 19:17:18.002495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.571 [2024-11-26 19:17:18.002531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.018147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.018180] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.034507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.034552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.051659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.051692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.068011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.068049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.085364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.085397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.101663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.101717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.118602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.118636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.135716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.135973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.151823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.152057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.168517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.168551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.185388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.185423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.201893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.201953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.218710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.218746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.235474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.235511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.831 [2024-11-26 19:17:18.251777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.831 [2024-11-26 19:17:18.251813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.100 [2024-11-26 19:17:18.269317] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.269356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.284399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.284456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.301102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.301140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.317475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.317672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.333969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.334138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.350440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.350588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.369007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.369276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.384558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.384753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.394007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.394184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.101 [2024-11-26 19:17:18.410242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.101 [2024-11-26 19:17:18.410452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.424936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.425189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.440665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.440858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.451088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.451265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.466345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.466547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.482986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.483162] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.499621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.499818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.516440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.516631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.102 [2024-11-26 19:17:18.533075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.102 [2024-11-26 19:17:18.533250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.549647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.549851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.565775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.565987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.581817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.582042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.592246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.592436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.607443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.607602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.624013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.624161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.640088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.640236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.658382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.658560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.673006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.673042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.689322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.689361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.706258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.706383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.725498] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.725540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.741129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.741163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 11392.00 IOPS, 89.00 MiB/s [2024-11-26T19:17:18.804Z] [2024-11-26 19:17:18.759450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.759486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.775087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.775120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.364 [2024-11-26 19:17:18.791799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.364 [2024-11-26 19:17:18.791833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.809233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.809266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.824353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.824541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.842676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.842714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.857775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.857812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.873429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.873466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.883967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.884003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.899382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.899574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.914755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.915080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.933729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.933797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.948842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:20.623 [2024-11-26 19:17:18.948880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.965865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.965919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.981911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.981944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:18.999224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:18.999264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:19.014822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:19.014859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:19.025479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:19.025693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:19.041710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:19.041747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.623 [2024-11-26 19:17:19.057216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.623 [2024-11-26 19:17:19.057297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.073664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.073698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.091483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.091519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.106480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.106515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.122115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.122180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.132287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.132517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.148986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.149236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.164459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.164668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.176172] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.176336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.192285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.192490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.209155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.209324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.225975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.226143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.242422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.242569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.260069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.260229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.276118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.276267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.292249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.292465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.883 [2024-11-26 19:17:19.309120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.883 [2024-11-26 19:17:19.309298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.141 [2024-11-26 19:17:19.324497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.141 [2024-11-26 19:17:19.324656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.141 [2024-11-26 19:17:19.334763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.141 [2024-11-26 19:17:19.334958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.141 [2024-11-26 19:17:19.352039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.141 [2024-11-26 19:17:19.352188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.141 [2024-11-26 19:17:19.367346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.367492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.377977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.378146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.394048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.394195] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.409977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.410177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.426662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.426884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.443118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.443293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.460034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.460069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.476659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.476695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.493207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.493291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.509846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.509884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.527457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.527631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.544693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.544728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.559380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.559416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.142 [2024-11-26 19:17:19.576397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.142 [2024-11-26 19:17:19.576444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.592680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.592717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.610400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.610673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.625852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.626055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.642534] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.642570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.659783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.659820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.676703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.676890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.692562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.692605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.709748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.709957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.725782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.725819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.736309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.736485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.752248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.752322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 11035.00 IOPS, 86.21 MiB/s [2024-11-26T19:17:19.840Z] [2024-11-26 19:17:19.768690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.768742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.784182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.784220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.803048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.803093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.818034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.818203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.400 [2024-11-26 19:17:19.833349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.400 [2024-11-26 19:17:19.833513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.843759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.843805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.859791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:21.659 [2024-11-26 19:17:19.859828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.875169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.875205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.890374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.890410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.900476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.900701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.916388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.916425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.932009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.932045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.948323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.948360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.964956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.965010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.981784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.981821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:19.997132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:19.997180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:20.012908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:20.012955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:20.032332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:20.032390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:20.048215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:20.048261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:20.065881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:20.065997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:20.081730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:20.081812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.659 [2024-11-26 19:17:20.092322] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.659 [2024-11-26 19:17:20.092503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.108682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.108724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.123323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.123548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.139004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.139231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.149930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.149993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.165756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.166046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.181346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.181582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.192417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.192599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.207580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.207759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.223511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.223567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.234269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.234432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.249381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.249607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.264088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.264274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.280661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.280714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.297564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.297628] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.312860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.312962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.935 [2024-11-26 19:17:20.328861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.935 [2024-11-26 19:17:20.328932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.936 [2024-11-26 19:17:20.347995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.936 [2024-11-26 19:17:20.348069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.936 [2024-11-26 19:17:20.363766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.936 [2024-11-26 19:17:20.363799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.379800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.379833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.397247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.397422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.414369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.414419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.429837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.429873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.446334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.446371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.464235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.464383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.479696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.479761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.490202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.490350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.505466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.505696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.521430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.521468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.531855] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.531891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.547806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.547840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.563432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.563470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.581235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.581272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.596732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.596935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.612500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.612734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.195 [2024-11-26 19:17:20.628023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.195 [2024-11-26 19:17:20.628058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.643344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.643379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.653402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.653561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.666333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.666368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.681005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.681209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.691976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.692012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.706979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.707209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.721897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.721990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.737146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.737180] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.753154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.753230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 10951.75 IOPS, 85.56 MiB/s [2024-11-26T19:17:20.894Z] [2024-11-26 19:17:20.770471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.770507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.786786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.786859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.803301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.803350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.820472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.820508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.837112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.837141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.854566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.854599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.871175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.871239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.454 [2024-11-26 19:17:20.889236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.454 [2024-11-26 19:17:20.889489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:20.904862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:20.905055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:20.914969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:20.915005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:20.931243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:20.931309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:20.946683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:20.946731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:20.963196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:20.963245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 
19:17:20.980444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:20.980724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:20.996646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:20.996680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.014471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.014677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.029969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.030198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.045746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.045780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.056384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.056615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.073130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.073162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.087985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.088014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.104440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.104476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.121199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.121237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.713 [2024-11-26 19:17:21.137330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.713 [2024-11-26 19:17:21.137377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.154540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.154690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.170083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.170119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.187611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.187665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.202456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.202492] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.211848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.211885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.228140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.228176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.244549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.244585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.262984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.263045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.278015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.278209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.294383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.294421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.310016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.310052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.319500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.972 [2024-11-26 19:17:21.319536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.972 [2024-11-26 19:17:21.336698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.973 [2024-11-26 19:17:21.336860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.973 [2024-11-26 19:17:21.353101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.973 [2024-11-26 19:17:21.353149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.973 [2024-11-26 19:17:21.368262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.973 [2024-11-26 19:17:21.368298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.973 [2024-11-26 19:17:21.384529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.973 [2024-11-26 19:17:21.384578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.973 [2024-11-26 19:17:21.403355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.973 [2024-11-26 19:17:21.403407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.418338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.418371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.433633] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.433830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.450501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.450536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.467069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.467101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.485930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.485976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.500706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.500908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.512055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.512092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.527527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.527561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.542739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.542914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.557760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.557967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.574243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.574294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.591487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.591522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.607319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.607354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.624444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.624673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.640617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.640651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.232 [2024-11-26 19:17:21.658307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.232 [2024-11-26 19:17:21.658475] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.491 [2024-11-26 19:17:21.674639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.491 [2024-11-26 19:17:21.674673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.491 [2024-11-26 19:17:21.693594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.491 [2024-11-26 19:17:21.693748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.491 [2024-11-26 19:17:21.708880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.491 [2024-11-26 19:17:21.709102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.491 [2024-11-26 19:17:21.725716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.725754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.742899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.742968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 10931.20 IOPS, 85.40 MiB/s [2024-11-26T19:17:21.932Z] [2024-11-26 19:17:21.758953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.759052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 00:09:23.492 Latency(us) 00:09:23.492 [2024-11-26T19:17:21.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.492 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:23.492 Nvme1n1 : 5.01 10929.92 85.39 0.00 0.00 11696.01 4676.89 25022.84 00:09:23.492 [2024-11-26T19:17:21.932Z] =================================================================================================================== 00:09:23.492 [2024-11-26T19:17:21.932Z] Total : 10929.92 85.39 0.00 0.00 11696.01 4676.89 25022.84 00:09:23.492 [2024-11-26 19:17:21.770852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.770887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.782895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.782956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.794915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.794979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.806943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.806991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.818897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.819203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.830950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 
19:17:21.831023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.842928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.842995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.854908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.854984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.866899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.866959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.878965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.879030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.890938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.890968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.903004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.903053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.914992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.915033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.492 [2024-11-26 19:17:21.926959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.492 [2024-11-26 19:17:21.926995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.751 [2024-11-26 19:17:21.938984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.751 [2024-11-26 19:17:21.939023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.751 [2024-11-26 19:17:21.951030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.751 [2024-11-26 19:17:21.951063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.751 [2024-11-26 19:17:21.962994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.751 [2024-11-26 19:17:21.963031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.751 [2024-11-26 19:17:21.974969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.751 [2024-11-26 19:17:21.975025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.751 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65277) - No such process 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65277 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.751 delay0 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.751 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.752 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.752 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:24.013 [2024-11-26 19:17:22.193452] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:30.583 Initializing NVMe Controllers 00:09:30.583 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:30.583 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:30.583 Initialization complete. Launching workers. 
00:09:30.583 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81 00:09:30.583 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 368, failed to submit 33 00:09:30.583 success 244, unsuccessful 124, failed 0 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.583 rmmod nvme_tcp 00:09:30.583 rmmod nvme_fabrics 00:09:30.583 rmmod nvme_keyring 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65129 ']' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65129 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65129 ']' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65129 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65129 00:09:30.583 killing process with pid 65129 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65129' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65129 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65129 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.583 19:17:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:30.583 00:09:30.583 real 0m24.411s 00:09:30.583 user 0m39.862s 00:09:30.583 sys 0m6.778s 00:09:30.583 ************************************ 00:09:30.583 END TEST nvmf_zcopy 00:09:30.583 ************************************ 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.583 19:17:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.583 ************************************ 00:09:30.583 START TEST nvmf_nmic 00:09:30.583 ************************************ 00:09:30.583 19:17:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:30.842 * Looking for test storage... 00:09:30.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.842 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.842 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.842 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.842 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.842 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.843 --rc genhtml_branch_coverage=1 00:09:30.843 --rc genhtml_function_coverage=1 00:09:30.843 --rc genhtml_legend=1 00:09:30.843 --rc geninfo_all_blocks=1 00:09:30.843 --rc geninfo_unexecuted_blocks=1 00:09:30.843 00:09:30.843 ' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.843 --rc genhtml_branch_coverage=1 00:09:30.843 --rc genhtml_function_coverage=1 00:09:30.843 --rc genhtml_legend=1 00:09:30.843 --rc geninfo_all_blocks=1 00:09:30.843 --rc geninfo_unexecuted_blocks=1 00:09:30.843 00:09:30.843 ' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.843 --rc genhtml_branch_coverage=1 00:09:30.843 --rc genhtml_function_coverage=1 00:09:30.843 --rc genhtml_legend=1 00:09:30.843 --rc geninfo_all_blocks=1 00:09:30.843 --rc geninfo_unexecuted_blocks=1 00:09:30.843 00:09:30.843 ' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.843 --rc genhtml_branch_coverage=1 00:09:30.843 --rc genhtml_function_coverage=1 00:09:30.843 --rc genhtml_legend=1 00:09:30.843 --rc geninfo_all_blocks=1 00:09:30.843 --rc geninfo_unexecuted_blocks=1 00:09:30.843 00:09:30.843 ' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.843 19:17:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:30.843 19:17:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.843 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:30.844 Cannot 
find device "nvmf_init_br" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:30.844 Cannot find device "nvmf_init_br2" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.844 Cannot find device "nvmf_tgt_br" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.844 Cannot find device "nvmf_tgt_br2" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:30.844 Cannot find device "nvmf_init_br" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:30.844 Cannot find device "nvmf_init_br2" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:30.844 Cannot find device "nvmf_tgt_br" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:30.844 Cannot find device "nvmf_tgt_br2" 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:30.844 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:31.103 Cannot find device "nvmf_br" 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:31.103 Cannot find device "nvmf_init_if" 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:31.103 Cannot find device "nvmf_init_if2" 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.103 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:31.363 00:09:31.363 --- 10.0.0.3 ping statistics --- 00:09:31.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.363 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.363 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.363 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:09:31.363 00:09:31.363 --- 10.0.0.4 ping statistics --- 00:09:31.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.363 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:31.363 00:09:31.363 --- 10.0.0.1 ping statistics --- 00:09:31.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.363 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:31.363 00:09:31.363 --- 10.0.0.2 ping statistics --- 00:09:31.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.363 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:31.363 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65657 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65657 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65657 ']' 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.364 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.364 [2024-11-26 19:17:29.679611] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:09:31.364 [2024-11-26 19:17:29.679736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.623 [2024-11-26 19:17:29.837381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.623 [2024-11-26 19:17:29.916540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.623 [2024-11-26 19:17:29.917146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.623 [2024-11-26 19:17:29.917256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.623 [2024-11-26 19:17:29.917354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.623 [2024-11-26 19:17:29.917458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.623 [2024-11-26 19:17:29.919068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.623 [2024-11-26 19:17:29.919136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.623 [2024-11-26 19:17:29.919248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.623 [2024-11-26 19:17:29.919255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.623 [2024-11-26 19:17:29.985957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 [2024-11-26 19:17:30.744514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 Malloc0 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:32.559 19:17:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 [2024-11-26 19:17:30.823343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:32.559 test case1: single bdev can't be used in multiple subsystems 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.559 [2024-11-26 19:17:30.847159] bdev.c:8259:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:32.559 [2024-11-26 19:17:30.847201] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:32.559 [2024-11-26 19:17:30.847219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.559 request: 00:09:32.559 { 00:09:32.559 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:32.559 "namespace": { 00:09:32.559 "bdev_name": "Malloc0", 00:09:32.559 "no_auto_visible": false 00:09:32.559 }, 00:09:32.559 "method": "nvmf_subsystem_add_ns", 00:09:32.559 "req_id": 1 00:09:32.559 } 00:09:32.559 Got JSON-RPC error response 00:09:32.559 response: 00:09:32.559 { 00:09:32.559 "code": -32602, 00:09:32.559 "message": "Invalid parameters" 00:09:32.559 } 00:09:32.559 Adding namespace failed - expected result. 00:09:32.559 test case2: host connect to nvmf target in multiple paths 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:32.559 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.560 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.560 [2024-11-26 19:17:30.859296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:32.560 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.560 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:32.818 19:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:32.818 19:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:32.818 19:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:32.818 19:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:32.818 19:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:32.818 19:17:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:34.721 19:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:34.721 19:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:34.721 19:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.980 19:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:34.980 19:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.980 19:17:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:34.980 19:17:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:34.980 [global] 00:09:34.980 thread=1 00:09:34.980 invalidate=1 00:09:34.980 rw=write 00:09:34.980 time_based=1 00:09:34.980 runtime=1 00:09:34.980 ioengine=libaio 00:09:34.980 direct=1 00:09:34.980 bs=4096 00:09:34.980 iodepth=1 00:09:34.980 norandommap=0 00:09:34.980 numjobs=1 00:09:34.980 00:09:34.980 verify_dump=1 00:09:34.980 verify_backlog=512 00:09:34.980 verify_state_save=0 00:09:34.980 do_verify=1 00:09:34.980 verify=crc32c-intel 00:09:34.980 [job0] 00:09:34.980 filename=/dev/nvme0n1 00:09:34.980 Could not set queue depth (nvme0n1) 00:09:34.980 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.980 fio-3.35 00:09:34.980 Starting 1 thread 00:09:36.358 00:09:36.358 job0: (groupid=0, jobs=1): err= 0: pid=65743: Tue Nov 26 19:17:34 2024 00:09:36.358 read: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:09:36.358 slat (nsec): min=12351, max=63145, avg=15635.54, stdev=5292.09 00:09:36.358 clat (usec): min=137, max=861, avg=193.45, stdev=40.04 00:09:36.358 lat (usec): min=151, max=877, avg=209.09, stdev=41.18 00:09:36.358 clat percentiles (usec): 00:09:36.358 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:09:36.358 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 194], 00:09:36.358 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 241], 95.00th=[ 260], 00:09:36.358 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 783], 99.95th=[ 791], 00:09:36.358 | 99.99th=[ 865] 00:09:36.358 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:36.358 slat (usec): min=15, max=133, avg=22.26, stdev= 7.69 00:09:36.358 clat (usec): min=83, max=248, avg=116.07, stdev=21.49 00:09:36.358 lat (usec): min=101, max=381, avg=138.33, stdev=24.21 00:09:36.358 clat percentiles (usec): 00:09:36.358 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 99], 00:09:36.358 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 117], 00:09:36.358 | 70.00th=[ 123], 80.00th=[ 133], 90.00th=[ 145], 95.00th=[ 159], 00:09:36.358 | 99.00th=[ 186], 99.50th=[ 200], 99.90th=[ 223], 99.95th=[ 245], 00:09:36.358 | 99.99th=[ 249] 00:09:36.358 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:36.358 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:36.358 lat (usec) : 100=12.42%, 250=84.16%, 500=3.38%, 1000=0.05% 00:09:36.358 cpu : usr=2.50%, sys=8.70%, ctx=5775, majf=0, minf=5 00:09:36.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.358 issued rwts: total=2703,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.358 00:09:36.358 Run status group 0 (all jobs): 00:09:36.358 READ: bw=10.5MiB/s (11.1MB/s), 10.5MiB/s-10.5MiB/s (11.1MB/s-11.1MB/s), io=10.6MiB (11.1MB), run=1001-1001msec 00:09:36.358 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:36.358 00:09:36.358 Disk stats (read/write): 00:09:36.358 nvme0n1: ios=2583/2560, merge=0/0, ticks=522/320, 
in_queue=842, util=91.17% 00:09:36.358 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.359 rmmod nvme_tcp 00:09:36.359 rmmod nvme_fabrics 00:09:36.359 rmmod nvme_keyring 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65657 ']' 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65657 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65657 ']' 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65657 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65657 00:09:36.359 killing process with pid 65657 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65657' 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65657 00:09:36.359 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65657 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:36.618 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:36.618 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:36.618 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.618 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:36.618 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:36.618 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:36.618 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:36.618 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:36.878 ************************************ 00:09:36.878 END TEST nvmf_nmic 00:09:36.878 ************************************ 00:09:36.878 00:09:36.878 real 0m6.288s 00:09:36.878 user 0m18.926s 00:09:36.878 sys 0m2.476s 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.878 19:17:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.878 ************************************ 00:09:36.878 START TEST nvmf_fio_target 00:09:36.878 ************************************ 00:09:36.878 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:37.139 * Looking for test storage... 00:09:37.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.139 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.139 --rc genhtml_branch_coverage=1 00:09:37.139 --rc genhtml_function_coverage=1 00:09:37.140 --rc genhtml_legend=1 00:09:37.140 --rc geninfo_all_blocks=1 00:09:37.140 --rc geninfo_unexecuted_blocks=1 00:09:37.140 00:09:37.140 ' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.140 --rc genhtml_branch_coverage=1 00:09:37.140 --rc genhtml_function_coverage=1 00:09:37.140 --rc genhtml_legend=1 00:09:37.140 --rc geninfo_all_blocks=1 00:09:37.140 --rc geninfo_unexecuted_blocks=1 00:09:37.140 00:09:37.140 ' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.140 --rc genhtml_branch_coverage=1 00:09:37.140 --rc genhtml_function_coverage=1 00:09:37.140 --rc genhtml_legend=1 00:09:37.140 --rc geninfo_all_blocks=1 00:09:37.140 --rc geninfo_unexecuted_blocks=1 00:09:37.140 00:09:37.140 ' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.140 --rc genhtml_branch_coverage=1 00:09:37.140 --rc genhtml_function_coverage=1 00:09:37.140 --rc genhtml_legend=1 00:09:37.140 --rc geninfo_all_blocks=1 00:09:37.140 --rc geninfo_unexecuted_blocks=1 00:09:37.140 00:09:37.140 ' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:37.140 
19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.140 19:17:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.140 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:37.141 Cannot find device "nvmf_init_br" 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:37.141 Cannot find device "nvmf_init_br2" 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:37.141 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:37.401 Cannot find device "nvmf_tgt_br" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.401 Cannot find device "nvmf_tgt_br2" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:37.401 Cannot find device "nvmf_init_br" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:37.401 Cannot find device "nvmf_init_br2" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:37.401 Cannot find device "nvmf_tgt_br" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:37.401 Cannot find device "nvmf_tgt_br2" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:37.401 Cannot find device "nvmf_br" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:37.401 Cannot find device "nvmf_init_if" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:37.401 Cannot find device "nvmf_init_if2" 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:37.401 
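As in the nmic run earlier, the repeated "Cannot find device" / "# true" pairs here are nvmf_veth_init doing a best-effort teardown of interfaces left over from a previous test before rebuilding the topology; the failures are masked so a missing device does not abort the run. A rough sketch of that pattern, with illustrative structure rather than the script's exact code:

    # Best-effort teardown: missing devices are expected on a clean host, so ignore failures.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down    || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true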
19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:37.401 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:37.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:09:37.662 00:09:37.662 --- 10.0.0.3 ping statistics --- 00:09:37.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.662 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:37.662 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:37.662 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:09:37.662 00:09:37.662 --- 10.0.0.4 ping statistics --- 00:09:37.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.662 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:37.662 00:09:37.662 --- 10.0.0.1 ping statistics --- 00:09:37.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.662 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:37.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:09:37.662 00:09:37.662 --- 10.0.0.2 ping statistics --- 00:09:37.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.662 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.662 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65983 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65983 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 65983 ']' 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.663 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.663 [2024-11-26 19:17:36.050026] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
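The nvmf/common.sh trace above builds the test network: one namespace for the target, two veth pairs whose host-side peers are enslaved to a Linux bridge, addresses in 10.0.0.0/24, and iptables ACCEPT rules for TCP port 4420. A minimal sketch of that topology, reduced to a single veth pair (the second *_if2/*_br2 pair is created the same way) and using the interface names and addresses that are simply this test's defaults, is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge joins the two host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                           # root namespace can now reach the target address

Once the pings succeed, nvmf_tgt itself is launched inside the namespace via ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt, which is the startup whose notices surround this point in the log.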
00:09:37.663 [2024-11-26 19:17:36.050124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.922 [2024-11-26 19:17:36.197211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.922 [2024-11-26 19:17:36.253073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.922 [2024-11-26 19:17:36.253161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.922 [2024-11-26 19:17:36.253173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.922 [2024-11-26 19:17:36.253181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.922 [2024-11-26 19:17:36.253187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.922 [2024-11-26 19:17:36.254527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.922 [2024-11-26 19:17:36.255093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.922 [2024-11-26 19:17:36.255318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.922 [2024-11-26 19:17:36.255271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.922 [2024-11-26 19:17:36.317077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.183 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.183 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:38.183 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.183 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.183 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.183 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.183 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:38.444 [2024-11-26 19:17:36.729763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.444 19:17:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.703 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:38.703 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.271 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:39.271 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.271 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:39.531 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.790 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:39.790 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:39.790 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.358 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:40.358 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.617 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:40.617 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.875 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:40.875 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:41.134 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:41.392 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:41.392 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.650 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:41.650 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.911 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:42.171 [2024-11-26 19:17:40.391606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:42.171 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:42.430 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:42.689 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:42.689 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:42.689 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:42.689 19:17:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.689 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:42.689 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:42.689 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:44.610 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:44.610 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:44.610 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.869 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:44.869 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.869 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:44.869 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:44.869 [global] 00:09:44.869 thread=1 00:09:44.869 invalidate=1 00:09:44.869 rw=write 00:09:44.869 time_based=1 00:09:44.869 runtime=1 00:09:44.869 ioengine=libaio 00:09:44.869 direct=1 00:09:44.869 bs=4096 00:09:44.869 iodepth=1 00:09:44.869 norandommap=0 00:09:44.869 numjobs=1 00:09:44.869 00:09:44.869 verify_dump=1 00:09:44.869 verify_backlog=512 00:09:44.869 verify_state_save=0 00:09:44.869 do_verify=1 00:09:44.869 verify=crc32c-intel 00:09:44.869 [job0] 00:09:44.869 filename=/dev/nvme0n1 00:09:44.869 [job1] 00:09:44.869 filename=/dev/nvme0n2 00:09:44.869 [job2] 00:09:44.869 filename=/dev/nvme0n3 00:09:44.869 [job3] 00:09:44.869 filename=/dev/nvme0n4 00:09:44.869 Could not set queue depth (nvme0n1) 00:09:44.869 Could not set queue depth (nvme0n2) 00:09:44.869 Could not set queue depth (nvme0n3) 00:09:44.869 Could not set queue depth (nvme0n4) 00:09:44.869 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.869 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.869 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.869 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.869 fio-3.35 00:09:44.869 Starting 4 threads 00:09:46.245 00:09:46.245 job0: (groupid=0, jobs=1): err= 0: pid=66160: Tue Nov 26 19:17:44 2024 00:09:46.245 read: IOPS=1091, BW=4368KiB/s (4472kB/s)(4372KiB/1001msec) 00:09:46.245 slat (usec): min=16, max=104, avg=31.84, stdev=12.19 00:09:46.245 clat (usec): min=208, max=1593, avg=436.94, stdev=119.57 00:09:46.245 lat (usec): min=236, max=1623, avg=468.78, stdev=124.12 00:09:46.245 clat percentiles (usec): 00:09:46.245 | 1.00th=[ 285], 5.00th=[ 322], 10.00th=[ 343], 20.00th=[ 363], 00:09:46.245 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 420], 00:09:46.245 | 70.00th=[ 437], 80.00th=[ 465], 90.00th=[ 627], 95.00th=[ 725], 00:09:46.245 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 988], 99.95th=[ 1598], 00:09:46.245 | 99.99th=[ 1598] 
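For reference, the target provisioning that target/fio.sh traced above condenses to a short rpc.py sequence followed by a host-side connect; this is a sketch of those same calls (the NQN, serial, listener address 10.0.0.3:4420, and host UUID are just the values used in this run, and the RAID/concat bdevs reuse the Malloc2..Malloc6 devices created immediately before them):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                                   # repeated to create Malloc0..Malloc6
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 \
        --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390

After the connect, the four namespaces appeared in this run as /dev/nvme0n1 through /dev/nvme0n4, which is what the fio job files above and below point at.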
00:09:46.245 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:46.245 slat (usec): min=23, max=211, avg=36.56, stdev= 9.52 00:09:46.245 clat (usec): min=120, max=590, avg=274.89, stdev=73.26 00:09:46.245 lat (usec): min=152, max=801, avg=311.45, stdev=75.17 00:09:46.245 clat percentiles (usec): 00:09:46.246 | 1.00th=[ 143], 5.00th=[ 161], 10.00th=[ 178], 20.00th=[ 204], 00:09:46.246 | 30.00th=[ 235], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 289], 00:09:46.246 | 70.00th=[ 306], 80.00th=[ 343], 90.00th=[ 379], 95.00th=[ 400], 00:09:46.246 | 99.00th=[ 441], 99.50th=[ 449], 99.90th=[ 529], 99.95th=[ 594], 00:09:46.246 | 99.99th=[ 594] 00:09:46.246 bw ( KiB/s): min= 7056, max= 7056, per=25.16%, avg=7056.00, stdev= 0.00, samples=1 00:09:46.246 iops : min= 1764, max= 1764, avg=1764.00, stdev= 0.00, samples=1 00:09:46.246 lat (usec) : 250=21.68%, 500=72.00%, 750=4.87%, 1000=1.41% 00:09:46.246 lat (msec) : 2=0.04% 00:09:46.246 cpu : usr=2.40%, sys=7.00%, ctx=2631, majf=0, minf=5 00:09:46.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 issued rwts: total=1093,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.246 job1: (groupid=0, jobs=1): err= 0: pid=66161: Tue Nov 26 19:17:44 2024 00:09:46.246 read: IOPS=1837, BW=7349KiB/s (7525kB/s)(7356KiB/1001msec) 00:09:46.246 slat (nsec): min=13285, max=53030, avg=17694.84, stdev=4798.28 00:09:46.246 clat (usec): min=180, max=417, avg=259.36, stdev=36.99 00:09:46.246 lat (usec): min=194, max=434, avg=277.06, stdev=37.03 00:09:46.246 clat percentiles (usec): 00:09:46.246 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 227], 00:09:46.246 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 269], 00:09:46.246 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 322], 00:09:46.246 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 379], 99.95th=[ 416], 00:09:46.246 | 99.99th=[ 416] 00:09:46.246 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:46.246 slat (nsec): min=18350, max=83968, avg=27518.66, stdev=7917.61 00:09:46.246 clat (usec): min=118, max=1601, avg=208.36, stdev=46.70 00:09:46.246 lat (usec): min=140, max=1629, avg=235.88, stdev=47.65 00:09:46.246 clat percentiles (usec): 00:09:46.246 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 178], 00:09:46.246 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 204], 60.00th=[ 215], 00:09:46.246 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 258], 95.00th=[ 273], 00:09:46.246 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 359], 99.95th=[ 367], 00:09:46.246 | 99.99th=[ 1598] 00:09:46.246 bw ( KiB/s): min= 8208, max= 8208, per=29.27%, avg=8208.00, stdev= 0.00, samples=1 00:09:46.246 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:09:46.246 lat (usec) : 250=66.89%, 500=33.08% 00:09:46.246 lat (msec) : 2=0.03% 00:09:46.246 cpu : usr=1.80%, sys=6.80%, ctx=3887, majf=0, minf=11 00:09:46.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 issued rwts: total=1839,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.246 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:09:46.246 job2: (groupid=0, jobs=1): err= 0: pid=66162: Tue Nov 26 19:17:44 2024 00:09:46.246 read: IOPS=1971, BW=7884KiB/s (8073kB/s)(7892KiB/1001msec) 00:09:46.246 slat (nsec): min=11814, max=67940, avg=14460.55, stdev=3532.47 00:09:46.246 clat (usec): min=173, max=2729, avg=261.77, stdev=67.25 00:09:46.246 lat (usec): min=188, max=2797, avg=276.23, stdev=68.27 00:09:46.246 clat percentiles (usec): 00:09:46.246 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 231], 00:09:46.246 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:09:46.246 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 322], 00:09:46.246 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 898], 99.95th=[ 2737], 00:09:46.246 | 99.99th=[ 2737] 00:09:46.246 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:46.246 slat (usec): min=15, max=124, avg=21.79, stdev= 5.57 00:09:46.246 clat (usec): min=121, max=340, avg=197.14, stdev=34.08 00:09:46.246 lat (usec): min=144, max=404, avg=218.93, stdev=34.75 00:09:46.246 clat percentiles (usec): 00:09:46.246 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 169], 00:09:46.246 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 202], 00:09:46.246 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 262], 00:09:46.246 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 334], 99.95th=[ 338], 00:09:46.246 | 99.99th=[ 343] 00:09:46.246 bw ( KiB/s): min= 8192, max= 8192, per=29.22%, avg=8192.00, stdev= 0.00, samples=1 00:09:46.246 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:46.246 lat (usec) : 250=67.10%, 500=32.83%, 750=0.02%, 1000=0.02% 00:09:46.246 lat (msec) : 4=0.02% 00:09:46.246 cpu : usr=1.50%, sys=5.80%, ctx=4022, majf=0, minf=9 00:09:46.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 issued rwts: total=1973,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.246 job3: (groupid=0, jobs=1): err= 0: pid=66163: Tue Nov 26 19:17:44 2024 00:09:46.246 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:46.246 slat (usec): min=15, max=136, avg=31.80, stdev=12.24 00:09:46.246 clat (usec): min=231, max=1074, avg=437.57, stdev=102.66 00:09:46.246 lat (usec): min=257, max=1100, avg=469.37, stdev=108.79 00:09:46.246 clat percentiles (usec): 00:09:46.246 | 1.00th=[ 302], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 371], 00:09:46.246 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 424], 00:09:46.246 | 70.00th=[ 445], 80.00th=[ 478], 90.00th=[ 603], 95.00th=[ 668], 00:09:46.246 | 99.00th=[ 766], 99.50th=[ 824], 99.90th=[ 971], 99.95th=[ 1074], 00:09:46.246 | 99.99th=[ 1074] 00:09:46.246 write: IOPS=1383, BW=5534KiB/s (5667kB/s)(5540KiB/1001msec); 0 zone resets 00:09:46.246 slat (usec): min=23, max=185, avg=40.36, stdev=12.57 00:09:46.246 clat (usec): min=136, max=946, avg=328.13, stdev=107.75 00:09:46.246 lat (usec): min=167, max=994, avg=368.49, stdev=113.65 00:09:46.246 clat percentiles (usec): 00:09:46.246 | 1.00th=[ 153], 5.00th=[ 176], 10.00th=[ 198], 20.00th=[ 243], 00:09:46.246 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 306], 60.00th=[ 338], 00:09:46.246 | 70.00th=[ 379], 80.00th=[ 424], 90.00th=[ 486], 95.00th=[ 529], 00:09:46.246 | 99.00th=[ 594], 
99.50th=[ 635], 99.90th=[ 799], 99.95th=[ 947], 00:09:46.246 | 99.99th=[ 947] 00:09:46.246 bw ( KiB/s): min= 5224, max= 5224, per=18.63%, avg=5224.00, stdev= 0.00, samples=1 00:09:46.246 iops : min= 1306, max= 1306, avg=1306.00, stdev= 0.00, samples=1 00:09:46.246 lat (usec) : 250=13.66%, 500=73.97%, 750=11.71%, 1000=0.62% 00:09:46.246 lat (msec) : 2=0.04% 00:09:46.246 cpu : usr=2.10%, sys=7.00%, ctx=2412, majf=0, minf=13 00:09:46.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.246 issued rwts: total=1024,1385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.246 00:09:46.246 Run status group 0 (all jobs): 00:09:46.246 READ: bw=23.1MiB/s (24.3MB/s), 4092KiB/s-7884KiB/s (4190kB/s-8073kB/s), io=23.2MiB (24.3MB), run=1001-1001msec 00:09:46.246 WRITE: bw=27.4MiB/s (28.7MB/s), 5534KiB/s-8184KiB/s (5667kB/s-8380kB/s), io=27.4MiB (28.7MB), run=1001-1001msec 00:09:46.246 00:09:46.246 Disk stats (read/write): 00:09:46.246 nvme0n1: ios=1073/1179, merge=0/0, ticks=492/340, in_queue=832, util=86.45% 00:09:46.246 nvme0n2: ios=1566/1732, merge=0/0, ticks=445/388, in_queue=833, util=86.86% 00:09:46.246 nvme0n3: ios=1536/1877, merge=0/0, ticks=416/397, in_queue=813, util=88.72% 00:09:46.246 nvme0n4: ios=982/1024, merge=0/0, ticks=438/354, in_queue=792, util=89.48% 00:09:46.246 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:46.246 [global] 00:09:46.246 thread=1 00:09:46.246 invalidate=1 00:09:46.246 rw=randwrite 00:09:46.246 time_based=1 00:09:46.246 runtime=1 00:09:46.246 ioengine=libaio 00:09:46.246 direct=1 00:09:46.246 bs=4096 00:09:46.246 iodepth=1 00:09:46.246 norandommap=0 00:09:46.246 numjobs=1 00:09:46.246 00:09:46.246 verify_dump=1 00:09:46.246 verify_backlog=512 00:09:46.246 verify_state_save=0 00:09:46.246 do_verify=1 00:09:46.246 verify=crc32c-intel 00:09:46.246 [job0] 00:09:46.246 filename=/dev/nvme0n1 00:09:46.246 [job1] 00:09:46.246 filename=/dev/nvme0n2 00:09:46.246 [job2] 00:09:46.246 filename=/dev/nvme0n3 00:09:46.246 [job3] 00:09:46.246 filename=/dev/nvme0n4 00:09:46.246 Could not set queue depth (nvme0n1) 00:09:46.246 Could not set queue depth (nvme0n2) 00:09:46.246 Could not set queue depth (nvme0n3) 00:09:46.246 Could not set queue depth (nvme0n4) 00:09:46.246 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.246 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.246 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.246 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.246 fio-3.35 00:09:46.246 Starting 4 threads 00:09:47.632 00:09:47.632 job0: (groupid=0, jobs=1): err= 0: pid=66221: Tue Nov 26 19:17:45 2024 00:09:47.632 read: IOPS=2394, BW=9578KiB/s (9808kB/s)(9588KiB/1001msec) 00:09:47.632 slat (usec): min=10, max=113, avg=14.86, stdev= 6.18 00:09:47.632 clat (usec): min=136, max=547, avg=200.84, stdev=61.40 00:09:47.632 lat (usec): min=147, max=564, avg=215.70, stdev=64.69 00:09:47.632 clat percentiles (usec): 00:09:47.632 | 
1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:09:47.632 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 182], 00:09:47.632 | 70.00th=[ 212], 80.00th=[ 255], 90.00th=[ 302], 95.00th=[ 330], 00:09:47.632 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 433], 99.95th=[ 449], 00:09:47.632 | 99.99th=[ 545] 00:09:47.632 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:47.632 slat (usec): min=13, max=104, avg=25.35, stdev=10.74 00:09:47.632 clat (usec): min=92, max=3278, avg=159.76, stdev=88.22 00:09:47.632 lat (usec): min=109, max=3352, avg=185.11, stdev=93.14 00:09:47.632 clat percentiles (usec): 00:09:47.632 | 1.00th=[ 98], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 115], 00:09:47.632 | 30.00th=[ 121], 40.00th=[ 127], 50.00th=[ 137], 60.00th=[ 157], 00:09:47.632 | 70.00th=[ 180], 80.00th=[ 204], 90.00th=[ 235], 95.00th=[ 262], 00:09:47.632 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 1106], 99.95th=[ 1565], 00:09:47.632 | 99.99th=[ 3294] 00:09:47.632 bw ( KiB/s): min= 8192, max= 8192, per=23.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:47.632 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:47.632 lat (usec) : 100=0.91%, 250=85.54%, 500=13.46%, 750=0.02%, 1000=0.02% 00:09:47.632 lat (msec) : 2=0.04%, 4=0.02% 00:09:47.632 cpu : usr=1.90%, sys=8.10%, ctx=4957, majf=0, minf=9 00:09:47.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.632 issued rwts: total=2397,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.632 job1: (groupid=0, jobs=1): err= 0: pid=66222: Tue Nov 26 19:17:45 2024 00:09:47.632 read: IOPS=2400, BW=9602KiB/s (9833kB/s)(9612KiB/1001msec) 00:09:47.632 slat (nsec): min=7945, max=58341, avg=14198.20, stdev=4904.44 00:09:47.632 clat (usec): min=132, max=445, avg=213.97, stdev=56.67 00:09:47.632 lat (usec): min=144, max=457, avg=228.17, stdev=57.76 00:09:47.632 clat percentiles (usec): 00:09:47.632 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:09:47.632 | 30.00th=[ 163], 40.00th=[ 178], 50.00th=[ 217], 60.00th=[ 235], 00:09:47.632 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 314], 00:09:47.632 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 408], 99.95th=[ 420], 00:09:47.632 | 99.99th=[ 445] 00:09:47.632 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:47.632 slat (usec): min=10, max=104, avg=21.84, stdev= 9.47 00:09:47.632 clat (usec): min=84, max=851, avg=151.41, stdev=44.02 00:09:47.632 lat (usec): min=104, max=934, avg=173.25, stdev=48.24 00:09:47.632 clat percentiles (usec): 00:09:47.632 | 1.00th=[ 93], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 110], 00:09:47.632 | 30.00th=[ 118], 40.00th=[ 133], 50.00th=[ 149], 60.00th=[ 163], 00:09:47.632 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 206], 95.00th=[ 225], 00:09:47.632 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 318], 99.95th=[ 330], 00:09:47.632 | 99.99th=[ 848] 00:09:47.632 bw ( KiB/s): min= 8744, max= 8744, per=25.28%, avg=8744.00, stdev= 0.00, samples=1 00:09:47.632 iops : min= 2186, max= 2186, avg=2186.00, stdev= 0.00, samples=1 00:09:47.632 lat (usec) : 100=3.51%, 250=81.58%, 500=14.89%, 1000=0.02% 00:09:47.632 cpu : usr=1.90%, sys=7.30%, ctx=4963, majf=0, minf=17 00:09:47.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.632 issued rwts: total=2403,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.632 job2: (groupid=0, jobs=1): err= 0: pid=66224: Tue Nov 26 19:17:45 2024 00:09:47.632 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:47.632 slat (usec): min=14, max=101, avg=22.09, stdev=10.13 00:09:47.632 clat (usec): min=152, max=873, avg=295.53, stdev=109.29 00:09:47.633 lat (usec): min=167, max=906, avg=317.62, stdev=115.30 00:09:47.633 clat percentiles (usec): 00:09:47.633 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 186], 00:09:47.633 | 30.00th=[ 204], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 293], 00:09:47.633 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 453], 95.00th=[ 515], 00:09:47.633 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 824], 99.95th=[ 873], 00:09:47.633 | 99.99th=[ 873] 00:09:47.633 write: IOPS=1999, BW=7996KiB/s (8188kB/s)(8004KiB/1001msec); 0 zone resets 00:09:47.633 slat (nsec): min=16414, max=96687, avg=29946.11, stdev=9996.61 00:09:47.633 clat (usec): min=105, max=2268, avg=221.53, stdev=98.26 00:09:47.633 lat (usec): min=126, max=2294, avg=251.48, stdev=103.27 00:09:47.633 clat percentiles (usec): 00:09:47.633 | 1.00th=[ 115], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 135], 00:09:47.633 | 30.00th=[ 147], 40.00th=[ 194], 50.00th=[ 215], 60.00th=[ 229], 00:09:47.633 | 70.00th=[ 255], 80.00th=[ 289], 90.00th=[ 351], 95.00th=[ 396], 00:09:47.633 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[ 611], 99.95th=[ 644], 00:09:47.633 | 99.99th=[ 2278] 00:09:47.633 bw ( KiB/s): min= 6376, max= 6376, per=18.43%, avg=6376.00, stdev= 0.00, samples=1 00:09:47.633 iops : min= 1594, max= 1594, avg=1594.00, stdev= 0.00, samples=1 00:09:47.633 lat (usec) : 250=53.92%, 500=43.34%, 750=2.66%, 1000=0.06% 00:09:47.633 lat (msec) : 4=0.03% 00:09:47.633 cpu : usr=2.10%, sys=7.40%, ctx=3537, majf=0, minf=10 00:09:47.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.633 issued rwts: total=1536,2001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.633 job3: (groupid=0, jobs=1): err= 0: pid=66225: Tue Nov 26 19:17:45 2024 00:09:47.633 read: IOPS=1533, BW=6134KiB/s (6281kB/s)(6140KiB/1001msec) 00:09:47.633 slat (nsec): min=7965, max=80574, avg=18441.36, stdev=8598.73 00:09:47.633 clat (usec): min=158, max=1423, avg=330.12, stdev=94.68 00:09:47.633 lat (usec): min=175, max=1461, avg=348.56, stdev=100.68 00:09:47.633 clat percentiles (usec): 00:09:47.633 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 260], 00:09:47.633 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 330], 00:09:47.633 | 70.00th=[ 359], 80.00th=[ 392], 90.00th=[ 457], 95.00th=[ 523], 00:09:47.633 | 99.00th=[ 619], 99.50th=[ 668], 99.90th=[ 865], 99.95th=[ 1418], 00:09:47.633 | 99.99th=[ 1418] 00:09:47.633 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:47.633 slat (usec): min=11, max=113, avg=30.51, stdev=11.96 00:09:47.633 clat (usec): min=106, max=7704, avg=267.02, stdev=230.99 00:09:47.633 lat (usec): min=131, max=7725, 
avg=297.53, stdev=233.95 00:09:47.633 clat percentiles (usec): 00:09:47.633 | 1.00th=[ 124], 5.00th=[ 145], 10.00th=[ 172], 20.00th=[ 194], 00:09:47.633 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 233], 60.00th=[ 255], 00:09:47.633 | 70.00th=[ 281], 80.00th=[ 322], 90.00th=[ 383], 95.00th=[ 429], 00:09:47.633 | 99.00th=[ 603], 99.50th=[ 725], 99.90th=[ 3556], 99.95th=[ 7701], 00:09:47.633 | 99.99th=[ 7701] 00:09:47.633 bw ( KiB/s): min= 5864, max= 5864, per=16.95%, avg=5864.00, stdev= 0.00, samples=1 00:09:47.633 iops : min= 1466, max= 1466, avg=1466.00, stdev= 0.00, samples=1 00:09:47.633 lat (usec) : 250=35.10%, 500=60.27%, 750=4.33%, 1000=0.10% 00:09:47.633 lat (msec) : 2=0.13%, 4=0.03%, 10=0.03% 00:09:47.633 cpu : usr=1.90%, sys=6.20%, ctx=3075, majf=0, minf=11 00:09:47.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.633 issued rwts: total=1535,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.633 00:09:47.633 Run status group 0 (all jobs): 00:09:47.633 READ: bw=30.7MiB/s (32.2MB/s), 6134KiB/s-9602KiB/s (6281kB/s-9833kB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:09:47.633 WRITE: bw=33.8MiB/s (35.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=33.8MiB (35.5MB), run=1001-1001msec 00:09:47.633 00:09:47.633 Disk stats (read/write): 00:09:47.633 nvme0n1: ios=2003/2048, merge=0/0, ticks=451/364, in_queue=815, util=86.97% 00:09:47.633 nvme0n2: ios=2091/2246, merge=0/0, ticks=451/350, in_queue=801, util=87.54% 00:09:47.633 nvme0n3: ios=1157/1536, merge=0/0, ticks=389/399, in_queue=788, util=89.11% 00:09:47.633 nvme0n4: ios=1024/1533, merge=0/0, ticks=367/415, in_queue=782, util=89.04% 00:09:47.633 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:47.633 [global] 00:09:47.633 thread=1 00:09:47.633 invalidate=1 00:09:47.633 rw=write 00:09:47.633 time_based=1 00:09:47.633 runtime=1 00:09:47.633 ioengine=libaio 00:09:47.633 direct=1 00:09:47.633 bs=4096 00:09:47.633 iodepth=128 00:09:47.633 norandommap=0 00:09:47.633 numjobs=1 00:09:47.633 00:09:47.633 verify_dump=1 00:09:47.633 verify_backlog=512 00:09:47.633 verify_state_save=0 00:09:47.633 do_verify=1 00:09:47.633 verify=crc32c-intel 00:09:47.633 [job0] 00:09:47.633 filename=/dev/nvme0n1 00:09:47.633 [job1] 00:09:47.633 filename=/dev/nvme0n2 00:09:47.633 [job2] 00:09:47.633 filename=/dev/nvme0n3 00:09:47.633 [job3] 00:09:47.633 filename=/dev/nvme0n4 00:09:47.633 Could not set queue depth (nvme0n1) 00:09:47.633 Could not set queue depth (nvme0n2) 00:09:47.633 Could not set queue depth (nvme0n3) 00:09:47.633 Could not set queue depth (nvme0n4) 00:09:47.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.633 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.633 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.633 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.633 fio-3.35 00:09:47.633 Starting 4 threads 00:09:49.012 00:09:49.012 job0: (groupid=0, jobs=1): err= 0: pid=66286: Tue Nov 26 
19:17:47 2024 00:09:49.012 read: IOPS=5201, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1003msec) 00:09:49.012 slat (usec): min=3, max=4492, avg=89.51, stdev=425.11 00:09:49.012 clat (usec): min=371, max=13808, avg=11725.12, stdev=1016.74 00:09:49.012 lat (usec): min=3357, max=14097, avg=11814.62, stdev=926.88 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[ 6915], 5.00th=[10552], 10.00th=[11207], 20.00th=[11469], 00:09:49.012 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:09:49.012 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:09:49.012 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13829], 99.95th=[13829], 00:09:49.012 | 99.99th=[13829] 00:09:49.012 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:49.012 slat (usec): min=9, max=2751, avg=88.48, stdev=374.37 00:09:49.012 clat (usec): min=8629, max=14333, avg=11642.75, stdev=746.97 00:09:49.012 lat (usec): min=8703, max=14350, avg=11731.23, stdev=648.22 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[ 9241], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:09:49.012 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:09:49.012 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:09:49.012 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14222], 99.95th=[14353], 00:09:49.012 | 99.99th=[14353] 00:09:49.012 bw ( KiB/s): min=21856, max=22914, per=34.40%, avg=22385.00, stdev=748.12, samples=2 00:09:49.012 iops : min= 5464, max= 5728, avg=5596.00, stdev=186.68, samples=2 00:09:49.012 lat (usec) : 500=0.01% 00:09:49.012 lat (msec) : 4=0.29%, 10=3.24%, 20=96.46% 00:09:49.012 cpu : usr=4.89%, sys=13.17%, ctx=346, majf=0, minf=13 00:09:49.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:49.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.012 issued rwts: total=5217,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.012 job1: (groupid=0, jobs=1): err= 0: pid=66287: Tue Nov 26 19:17:47 2024 00:09:49.012 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:49.012 slat (usec): min=7, max=6232, avg=88.14, stdev=546.18 00:09:49.012 clat (usec): min=7361, max=21063, avg=12484.04, stdev=1408.86 00:09:49.012 lat (usec): min=7371, max=25041, avg=12572.18, stdev=1437.44 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[ 8094], 5.00th=[11076], 10.00th=[11469], 20.00th=[11863], 00:09:49.012 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:09:49.012 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13435], 95.00th=[13566], 00:09:49.012 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:09:49.012 | 99.99th=[21103] 00:09:49.012 write: IOPS=5550, BW=21.7MiB/s (22.7MB/s)(21.7MiB/1003msec); 0 zone resets 00:09:49.012 slat (usec): min=10, max=7133, avg=91.32, stdev=530.34 00:09:49.012 clat (usec): min=532, max=16271, avg=11336.40, stdev=1220.43 00:09:49.012 lat (usec): min=4944, max=16477, avg=11427.72, stdev=1125.99 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[ 6325], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:09:49.012 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:49.012 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:09:49.012 | 99.00th=[14484], 99.50th=[15664], 
99.90th=[16188], 99.95th=[16319], 00:09:49.012 | 99.99th=[16319] 00:09:49.012 bw ( KiB/s): min=21048, max=22464, per=33.43%, avg=21756.00, stdev=1001.26, samples=2 00:09:49.012 iops : min= 5262, max= 5616, avg=5439.00, stdev=250.32, samples=2 00:09:49.012 lat (usec) : 750=0.01% 00:09:49.012 lat (msec) : 10=4.73%, 20=94.96%, 50=0.30% 00:09:49.012 cpu : usr=6.19%, sys=12.97%, ctx=234, majf=0, minf=13 00:09:49.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:49.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.012 issued rwts: total=5120,5567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.012 job2: (groupid=0, jobs=1): err= 0: pid=66288: Tue Nov 26 19:17:47 2024 00:09:49.012 read: IOPS=2274, BW=9097KiB/s (9315kB/s)(9124KiB/1003msec) 00:09:49.012 slat (usec): min=6, max=8900, avg=205.01, stdev=971.46 00:09:49.012 clat (usec): min=2391, max=35654, avg=26253.93, stdev=3825.45 00:09:49.012 lat (usec): min=2405, max=36418, avg=26458.94, stdev=3723.12 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[ 4817], 5.00th=[20317], 10.00th=[25035], 20.00th=[25560], 00:09:49.012 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:09:49.012 | 70.00th=[27132], 80.00th=[27919], 90.00th=[29492], 95.00th=[30802], 00:09:49.012 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:09:49.012 | 99.99th=[35914] 00:09:49.012 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:09:49.012 slat (usec): min=11, max=9845, avg=199.46, stdev=976.57 00:09:49.012 clat (usec): min=17182, max=36245, avg=25877.21, stdev=2230.66 00:09:49.012 lat (usec): min=18771, max=36270, avg=26076.67, stdev=2043.89 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[20055], 5.00th=[23200], 10.00th=[23987], 20.00th=[24773], 00:09:49.012 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:09:49.012 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[30016], 00:09:49.012 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:09:49.012 | 99.99th=[36439] 00:09:49.012 bw ( KiB/s): min=10048, max=10432, per=15.73%, avg=10240.00, stdev=271.53, samples=2 00:09:49.012 iops : min= 2512, max= 2608, avg=2560.00, stdev=67.88, samples=2 00:09:49.012 lat (msec) : 4=0.19%, 10=0.66%, 20=1.76%, 50=97.40% 00:09:49.012 cpu : usr=2.59%, sys=7.49%, ctx=181, majf=0, minf=13 00:09:49.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:49.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.012 issued rwts: total=2281,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.012 job3: (groupid=0, jobs=1): err= 0: pid=66289: Tue Nov 26 19:17:47 2024 00:09:49.012 read: IOPS=2300, BW=9200KiB/s (9421kB/s)(9228KiB/1003msec) 00:09:49.012 slat (usec): min=6, max=10238, avg=211.98, stdev=1093.23 00:09:49.012 clat (usec): min=1755, max=34152, avg=26012.06, stdev=3392.00 00:09:49.012 lat (usec): min=1776, max=34170, avg=26224.04, stdev=3245.78 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[ 8586], 5.00th=[21103], 10.00th=[23462], 20.00th=[25297], 00:09:49.012 | 30.00th=[25822], 40.00th=[26084], 
50.00th=[26346], 60.00th=[26608], 00:09:49.012 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28443], 95.00th=[30802], 00:09:49.012 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:09:49.012 | 99.99th=[34341] 00:09:49.012 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:09:49.012 slat (usec): min=11, max=8881, avg=190.95, stdev=934.49 00:09:49.012 clat (usec): min=14242, max=33311, avg=25795.62, stdev=2559.97 00:09:49.012 lat (usec): min=18870, max=33324, avg=25986.57, stdev=2366.10 00:09:49.012 clat percentiles (usec): 00:09:49.012 | 1.00th=[19792], 5.00th=[20579], 10.00th=[22676], 20.00th=[24511], 00:09:49.012 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:09:49.012 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[30016], 00:09:49.012 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:09:49.012 | 99.99th=[33424] 00:09:49.012 bw ( KiB/s): min= 9464, max=11016, per=15.73%, avg=10240.00, stdev=1097.43, samples=2 00:09:49.012 iops : min= 2366, max= 2754, avg=2560.00, stdev=274.36, samples=2 00:09:49.012 lat (msec) : 2=0.06%, 10=0.66%, 20=1.77%, 50=97.51% 00:09:49.012 cpu : usr=2.89%, sys=7.49%, ctx=154, majf=0, minf=13 00:09:49.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:49.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.012 issued rwts: total=2307,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.012 00:09:49.013 Run status group 0 (all jobs): 00:09:49.013 READ: bw=58.1MiB/s (60.9MB/s), 9097KiB/s-20.3MiB/s (9315kB/s-21.3MB/s), io=58.3MiB (61.1MB), run=1003-1003msec 00:09:49.013 WRITE: bw=63.6MiB/s (66.6MB/s), 9.97MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=63.7MiB (66.8MB), run=1003-1003msec 00:09:49.013 00:09:49.013 Disk stats (read/write): 00:09:49.013 nvme0n1: ios=4658/4832, merge=0/0, ticks=12166/11579, in_queue=23745, util=88.47% 00:09:49.013 nvme0n2: ios=4649/4616, merge=0/0, ticks=53726/47921, in_queue=101647, util=89.30% 00:09:49.013 nvme0n3: ios=2054/2126, merge=0/0, ticks=13252/12546, in_queue=25798, util=88.95% 00:09:49.013 nvme0n4: ios=2048/2208, merge=0/0, ticks=13228/12493, in_queue=25721, util=89.51% 00:09:49.013 19:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:49.013 [global] 00:09:49.013 thread=1 00:09:49.013 invalidate=1 00:09:49.013 rw=randwrite 00:09:49.013 time_based=1 00:09:49.013 runtime=1 00:09:49.013 ioengine=libaio 00:09:49.013 direct=1 00:09:49.013 bs=4096 00:09:49.013 iodepth=128 00:09:49.013 norandommap=0 00:09:49.013 numjobs=1 00:09:49.013 00:09:49.013 verify_dump=1 00:09:49.013 verify_backlog=512 00:09:49.013 verify_state_save=0 00:09:49.013 do_verify=1 00:09:49.013 verify=crc32c-intel 00:09:49.013 [job0] 00:09:49.013 filename=/dev/nvme0n1 00:09:49.013 [job1] 00:09:49.013 filename=/dev/nvme0n2 00:09:49.013 [job2] 00:09:49.013 filename=/dev/nvme0n3 00:09:49.013 [job3] 00:09:49.013 filename=/dev/nvme0n4 00:09:49.013 Could not set queue depth (nvme0n1) 00:09:49.013 Could not set queue depth (nvme0n2) 00:09:49.013 Could not set queue depth (nvme0n3) 00:09:49.013 Could not set queue depth (nvme0n4) 00:09:49.013 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:49.013 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.013 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.013 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.013 fio-3.35 00:09:49.013 Starting 4 threads 00:09:50.388 00:09:50.388 job0: (groupid=0, jobs=1): err= 0: pid=66342: Tue Nov 26 19:17:48 2024 00:09:50.388 read: IOPS=2407, BW=9631KiB/s (9862kB/s)(9708KiB/1008msec) 00:09:50.388 slat (usec): min=8, max=19892, avg=210.72, stdev=1634.69 00:09:50.388 clat (usec): min=2891, max=44545, avg=27366.33, stdev=3134.97 00:09:50.388 lat (usec): min=19119, max=51583, avg=27577.05, stdev=3385.68 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[19530], 5.00th=[21890], 10.00th=[25297], 20.00th=[26084], 00:09:50.388 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:09:50.388 | 70.00th=[27657], 80.00th=[27919], 90.00th=[32900], 95.00th=[32900], 00:09:50.388 | 99.00th=[35914], 99.50th=[41681], 99.90th=[43779], 99.95th=[44303], 00:09:50.388 | 99.99th=[44303] 00:09:50.388 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:09:50.388 slat (usec): min=7, max=17577, avg=186.74, stdev=1309.62 00:09:50.388 clat (usec): min=10996, max=32985, avg=23959.63, stdev=3653.92 00:09:50.388 lat (usec): min=11020, max=33020, avg=24146.38, stdev=3459.55 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[11076], 5.00th=[18482], 10.00th=[19792], 20.00th=[22414], 00:09:50.388 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:09:50.388 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26084], 95.00th=[28705], 00:09:50.388 | 99.00th=[32900], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:09:50.388 | 99.99th=[32900] 00:09:50.388 bw ( KiB/s): min= 9224, max=11256, per=17.38%, avg=10240.00, stdev=1436.84, samples=2 00:09:50.388 iops : min= 2306, max= 2814, avg=2560.00, stdev=359.21, samples=2 00:09:50.388 lat (msec) : 4=0.02%, 20=6.98%, 50=93.00% 00:09:50.388 cpu : usr=1.39%, sys=6.26%, ctx=111, majf=0, minf=17 00:09:50.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:50.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.388 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.388 job1: (groupid=0, jobs=1): err= 0: pid=66343: Tue Nov 26 19:17:48 2024 00:09:50.388 read: IOPS=5082, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:09:50.388 slat (usec): min=7, max=6241, avg=92.24, stdev=558.42 00:09:50.388 clat (usec): min=1621, max=21010, avg=12876.49, stdev=1534.30 00:09:50.388 lat (usec): min=5708, max=25040, avg=12968.72, stdev=1536.71 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[ 7504], 5.00th=[11076], 10.00th=[11994], 20.00th=[12387], 00:09:50.388 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:09:50.388 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:09:50.388 | 99.00th=[19530], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:09:50.388 | 99.99th=[21103] 00:09:50.388 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:09:50.388 slat (usec): min=11, 
max=9058, avg=95.35, stdev=562.59 00:09:50.388 clat (usec): min=6061, max=19145, avg=12044.79, stdev=1306.12 00:09:50.388 lat (usec): min=8253, max=19189, avg=12140.14, stdev=1217.14 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[ 8160], 5.00th=[10552], 10.00th=[10683], 20.00th=[11076], 00:09:50.388 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:09:50.388 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13435], 95.00th=[14353], 00:09:50.388 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:09:50.388 | 99.99th=[19268] 00:09:50.388 bw ( KiB/s): min=20480, max=20521, per=34.79%, avg=20500.50, stdev=28.99, samples=2 00:09:50.388 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:50.388 lat (msec) : 2=0.01%, 10=3.60%, 20=96.00%, 50=0.39% 00:09:50.388 cpu : usr=4.98%, sys=14.63%, ctx=218, majf=0, minf=9 00:09:50.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:50.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.388 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.388 job2: (groupid=0, jobs=1): err= 0: pid=66344: Tue Nov 26 19:17:48 2024 00:09:50.388 read: IOPS=2407, BW=9631KiB/s (9862kB/s)(9708KiB/1008msec) 00:09:50.388 slat (usec): min=10, max=13974, avg=186.08, stdev=1197.22 00:09:50.388 clat (usec): min=2233, max=47290, avg=26071.06, stdev=4352.68 00:09:50.388 lat (usec): min=8239, max=53191, avg=26257.14, stdev=4319.62 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[ 8848], 5.00th=[18220], 10.00th=[20317], 20.00th=[25560], 00:09:50.388 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:09:50.388 | 70.00th=[27395], 80.00th=[27657], 90.00th=[28443], 95.00th=[29754], 00:09:50.388 | 99.00th=[43779], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:09:50.388 | 99.99th=[47449] 00:09:50.388 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:09:50.388 slat (usec): min=10, max=26273, avg=206.23, stdev=1371.45 00:09:50.388 clat (usec): min=12831, max=42362, avg=25140.91, stdev=3378.49 00:09:50.388 lat (usec): min=20039, max=42390, avg=25347.14, stdev=3168.16 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[14877], 5.00th=[22152], 10.00th=[23200], 20.00th=[23725], 00:09:50.388 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:09:50.388 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26084], 95.00th=[29492], 00:09:50.388 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:50.388 | 99.99th=[42206] 00:09:50.388 bw ( KiB/s): min= 9736, max=10765, per=17.40%, avg=10250.50, stdev=727.61, samples=2 00:09:50.388 iops : min= 2434, max= 2691, avg=2562.50, stdev=181.73, samples=2 00:09:50.388 lat (msec) : 4=0.02%, 10=1.04%, 20=4.31%, 50=94.63% 00:09:50.388 cpu : usr=3.38%, sys=8.14%, ctx=146, majf=0, minf=13 00:09:50.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:50.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.388 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.388 job3: (groupid=0, jobs=1): err= 0: pid=66345: 
Tue Nov 26 19:17:48 2024 00:09:50.388 read: IOPS=4455, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1003msec) 00:09:50.388 slat (usec): min=7, max=10848, avg=104.61, stdev=606.10 00:09:50.388 clat (usec): min=1962, max=25770, avg=14548.47, stdev=1749.86 00:09:50.388 lat (usec): min=6282, max=27539, avg=14653.08, stdev=1788.63 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[ 7308], 5.00th=[11863], 10.00th=[13435], 20.00th=[13960], 00:09:50.388 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:09:50.388 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[16319], 00:09:50.388 | 99.00th=[20055], 99.50th=[20317], 99.90th=[23462], 99.95th=[23462], 00:09:50.388 | 99.99th=[25822] 00:09:50.388 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:50.388 slat (usec): min=6, max=10417, avg=107.09, stdev=641.86 00:09:50.388 clat (usec): min=6382, max=19136, avg=13487.34, stdev=1404.89 00:09:50.388 lat (usec): min=6406, max=19161, avg=13594.43, stdev=1272.88 00:09:50.388 clat percentiles (usec): 00:09:50.388 | 1.00th=[ 7308], 5.00th=[11469], 10.00th=[12649], 20.00th=[13173], 00:09:50.388 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:09:50.389 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[14484], 00:09:50.389 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:09:50.389 | 99.99th=[19006] 00:09:50.389 bw ( KiB/s): min=17976, max=18925, per=31.31%, avg=18450.50, stdev=671.04, samples=2 00:09:50.389 iops : min= 4494, max= 4731, avg=4612.50, stdev=167.58, samples=2 00:09:50.389 lat (msec) : 2=0.01%, 10=3.36%, 20=96.19%, 50=0.44% 00:09:50.389 cpu : usr=4.19%, sys=13.97%, ctx=218, majf=0, minf=13 00:09:50.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:50.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.389 issued rwts: total=4469,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.389 00:09:50.389 Run status group 0 (all jobs): 00:09:50.389 READ: bw=55.9MiB/s (58.7MB/s), 9631KiB/s-19.9MiB/s (9862kB/s-20.8MB/s), io=56.4MiB (59.1MB), run=1003-1008msec 00:09:50.389 WRITE: bw=57.5MiB/s (60.3MB/s), 9.92MiB/s-19.9MiB/s (10.4MB/s-20.8MB/s), io=58.0MiB (60.8MB), run=1003-1008msec 00:09:50.389 00:09:50.389 Disk stats (read/write): 00:09:50.389 nvme0n1: ios=2098/2176, merge=0/0, ticks=54900/49727, in_queue=104627, util=87.98% 00:09:50.389 nvme0n2: ios=4144/4606, merge=0/0, ticks=50078/50845, in_queue=100923, util=88.78% 00:09:50.389 nvme0n3: ios=2040/2120, merge=0/0, ticks=52095/50711, in_queue=102806, util=89.06% 00:09:50.389 nvme0n4: ios=3646/4096, merge=0/0, ticks=50587/50891, in_queue=101478, util=89.71% 00:09:50.389 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:50.389 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66358 00:09:50.389 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:50.389 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:50.389 [global] 00:09:50.389 thread=1 00:09:50.389 invalidate=1 00:09:50.389 rw=read 00:09:50.389 time_based=1 00:09:50.389 runtime=10 00:09:50.389 ioengine=libaio 00:09:50.389 direct=1 00:09:50.389 
bs=4096 00:09:50.389 iodepth=1 00:09:50.389 norandommap=1 00:09:50.389 numjobs=1 00:09:50.389 00:09:50.389 [job0] 00:09:50.389 filename=/dev/nvme0n1 00:09:50.389 [job1] 00:09:50.389 filename=/dev/nvme0n2 00:09:50.389 [job2] 00:09:50.389 filename=/dev/nvme0n3 00:09:50.389 [job3] 00:09:50.389 filename=/dev/nvme0n4 00:09:50.389 Could not set queue depth (nvme0n1) 00:09:50.389 Could not set queue depth (nvme0n2) 00:09:50.389 Could not set queue depth (nvme0n3) 00:09:50.389 Could not set queue depth (nvme0n4) 00:09:50.647 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.647 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.647 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.647 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.647 fio-3.35 00:09:50.647 Starting 4 threads 00:09:53.924 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:53.924 fio: pid=66406, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:53.924 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=63459328, buflen=4096 00:09:53.924 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:53.924 fio: pid=66404, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:53.924 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=59613184, buflen=4096 00:09:53.924 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.924 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:54.181 fio: pid=66402, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.181 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3325952, buflen=4096 00:09:54.181 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.181 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:54.439 fio: pid=66403, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.439 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14659584, buflen=4096 00:09:54.439 00:09:54.439 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66402: Tue Nov 26 19:17:52 2024 00:09:54.439 read: IOPS=5034, BW=19.7MiB/s (20.6MB/s)(67.2MiB/3416msec) 00:09:54.439 slat (usec): min=7, max=10779, avg=14.30, stdev=140.64 00:09:54.439 clat (usec): min=123, max=3020, avg=183.32, stdev=56.44 00:09:54.439 lat (usec): min=134, max=10952, avg=197.62, stdev=151.98 00:09:54.439 clat percentiles (usec): 00:09:54.439 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:09:54.439 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 00:09:54.439 | 70.00th=[ 188], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 255], 
00:09:54.439 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 775], 99.95th=[ 930], 00:09:54.439 | 99.99th=[ 2343] 00:09:54.439 bw ( KiB/s): min=14976, max=23368, per=27.84%, avg=20194.67, stdev=3592.16, samples=6 00:09:54.439 iops : min= 3744, max= 5842, avg=5048.67, stdev=898.04, samples=6 00:09:54.439 lat (usec) : 250=92.09%, 500=7.70%, 750=0.09%, 1000=0.08% 00:09:54.439 lat (msec) : 2=0.02%, 4=0.01% 00:09:54.439 cpu : usr=1.23%, sys=5.74%, ctx=17206, majf=0, minf=1 00:09:54.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 issued rwts: total=17197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.439 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66403: Tue Nov 26 19:17:52 2024 00:09:54.439 read: IOPS=5386, BW=21.0MiB/s (22.1MB/s)(78.0MiB/3706msec) 00:09:54.439 slat (usec): min=10, max=11975, avg=16.32, stdev=150.71 00:09:54.439 clat (usec): min=3, max=4032, avg=167.99, stdev=49.40 00:09:54.439 lat (usec): min=135, max=12233, avg=184.31, stdev=159.15 00:09:54.439 clat percentiles (usec): 00:09:54.439 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:09:54.439 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:09:54.439 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 204], 00:09:54.439 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 465], 99.95th=[ 840], 00:09:54.439 | 99.99th=[ 3523] 00:09:54.439 bw ( KiB/s): min=19968, max=22592, per=29.63%, avg=21491.29, stdev=925.01, samples=7 00:09:54.439 iops : min= 4992, max= 5648, avg=5372.71, stdev=231.13, samples=7 00:09:54.439 lat (usec) : 4=0.01%, 50=0.01%, 250=99.59%, 500=0.31%, 750=0.03% 00:09:54.439 lat (usec) : 1000=0.02% 00:09:54.439 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:09:54.439 cpu : usr=1.84%, sys=6.48%, ctx=19994, majf=0, minf=2 00:09:54.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 issued rwts: total=19964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.439 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66404: Tue Nov 26 19:17:52 2024 00:09:54.439 read: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(56.9MiB/3159msec) 00:09:54.439 slat (usec): min=7, max=12784, avg=14.14, stdev=122.23 00:09:54.439 clat (usec): min=121, max=6750, avg=201.70, stdev=126.63 00:09:54.439 lat (usec): min=154, max=13069, avg=215.84, stdev=176.51 00:09:54.439 clat percentiles (usec): 00:09:54.439 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:09:54.439 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 196], 00:09:54.439 | 70.00th=[ 217], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 258], 00:09:54.439 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 1352], 99.95th=[ 3392], 00:09:54.439 | 99.99th=[ 6390] 00:09:54.439 bw ( KiB/s): min=14776, max=21648, per=25.62%, avg=18582.67, stdev=3063.41, samples=6 00:09:54.439 iops : min= 3694, max= 5412, avg=4645.67, stdev=765.85, samples=6 00:09:54.439 lat (usec) : 250=90.52%, 500=9.22%, 750=0.07%, 
1000=0.07% 00:09:54.439 lat (msec) : 2=0.02%, 4=0.06%, 10=0.03% 00:09:54.439 cpu : usr=1.30%, sys=5.38%, ctx=14563, majf=0, minf=2 00:09:54.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 issued rwts: total=14555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.439 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66406: Tue Nov 26 19:17:52 2024 00:09:54.439 read: IOPS=5340, BW=20.9MiB/s (21.9MB/s)(60.5MiB/2901msec) 00:09:54.439 slat (nsec): min=8227, max=78884, avg=12465.21, stdev=2984.40 00:09:54.439 clat (usec): min=140, max=1149, avg=173.60, stdev=21.60 00:09:54.439 lat (usec): min=152, max=1161, avg=186.07, stdev=22.12 00:09:54.439 clat percentiles (usec): 00:09:54.439 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:09:54.439 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:54.439 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 208], 00:09:54.439 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 269], 99.95th=[ 314], 00:09:54.439 | 99.99th=[ 1020] 00:09:54.439 bw ( KiB/s): min=19848, max=22288, per=29.35%, avg=21291.20, stdev=963.54, samples=5 00:09:54.439 iops : min= 4962, max= 5572, avg=5322.80, stdev=240.88, samples=5 00:09:54.439 lat (usec) : 250=99.70%, 500=0.26%, 750=0.01%, 1000=0.01% 00:09:54.439 lat (msec) : 2=0.01% 00:09:54.439 cpu : usr=1.48%, sys=6.00%, ctx=15495, majf=0, minf=1 00:09:54.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.439 issued rwts: total=15494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.439 00:09:54.439 Run status group 0 (all jobs): 00:09:54.439 READ: bw=70.8MiB/s (74.3MB/s), 18.0MiB/s-21.0MiB/s (18.9MB/s-22.1MB/s), io=263MiB (275MB), run=2901-3706msec 00:09:54.439 00:09:54.439 Disk stats (read/write): 00:09:54.439 nvme0n1: ios=16888/0, merge=0/0, ticks=3098/0, in_queue=3098, util=95.25% 00:09:54.439 nvme0n2: ios=19444/0, merge=0/0, ticks=3298/0, in_queue=3298, util=95.51% 00:09:54.439 nvme0n3: ios=14396/0, merge=0/0, ticks=2838/0, in_queue=2838, util=95.59% 00:09:54.439 nvme0n4: ios=15354/0, merge=0/0, ticks=2690/0, in_queue=2690, util=96.73% 00:09:54.439 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.439 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:54.697 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.697 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:54.954 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.954 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:55.212 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.212 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:55.470 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.470 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66358 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:55.792 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.050 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:56.050 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.050 nvmf hotplug test: fio failed as expected 00:09:56.050 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:56.050 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:56.050 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:56.050 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.308 19:17:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.308 rmmod nvme_tcp 00:09:56.308 rmmod nvme_fabrics 00:09:56.308 rmmod nvme_keyring 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65983 ']' 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65983 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 65983 ']' 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 65983 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65983 00:09:56.308 killing process with pid 65983 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65983' 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 65983 00:09:56.308 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 65983 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br 
nomaster 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:56.566 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:56.824 00:09:56.824 real 0m19.788s 00:09:56.824 user 1m13.380s 00:09:56.824 sys 0m10.643s 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.824 ************************************ 00:09:56.824 END TEST nvmf_fio_target 00:09:56.824 ************************************ 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.824 ************************************ 00:09:56.824 START TEST nvmf_bdevio 00:09:56.824 ************************************ 00:09:56.824 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.824 * Looking for test storage... 
00:09:56.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.825 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.825 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.825 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.083 --rc genhtml_branch_coverage=1 00:09:57.083 --rc genhtml_function_coverage=1 00:09:57.083 --rc genhtml_legend=1 00:09:57.083 --rc geninfo_all_blocks=1 00:09:57.083 --rc geninfo_unexecuted_blocks=1 00:09:57.083 00:09:57.083 ' 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.083 --rc genhtml_branch_coverage=1 00:09:57.083 --rc genhtml_function_coverage=1 00:09:57.083 --rc genhtml_legend=1 00:09:57.083 --rc geninfo_all_blocks=1 00:09:57.083 --rc geninfo_unexecuted_blocks=1 00:09:57.083 00:09:57.083 ' 00:09:57.083 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.083 --rc genhtml_branch_coverage=1 00:09:57.083 --rc genhtml_function_coverage=1 00:09:57.084 --rc genhtml_legend=1 00:09:57.084 --rc geninfo_all_blocks=1 00:09:57.084 --rc geninfo_unexecuted_blocks=1 00:09:57.084 00:09:57.084 ' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.084 --rc genhtml_branch_coverage=1 00:09:57.084 --rc genhtml_function_coverage=1 00:09:57.084 --rc genhtml_legend=1 00:09:57.084 --rc geninfo_all_blocks=1 00:09:57.084 --rc geninfo_unexecuted_blocks=1 00:09:57.084 00:09:57.084 ' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
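The nvmftestinit call traced below builds the virtual test network used when NET_TYPE=virt: the target runs inside a network namespace and is reached over veth pairs joined by a bridge. A condensed, hand-written sketch of that topology, using only commands that appear in the trace (interface, namespace and address names as captured; the second initiator/target pair nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4 is set up the same way and omitted here; run as root):

ip netns add nvmf_tgt_ns_spdk                                             # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br                 # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                   # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                            # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target listen address
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up                   # bridge joins the two host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic on the test port

The ping checks further down in the trace verify exactly this layout: 10.0.0.3 and 10.0.0.4 answer from the host side, and 10.0.0.1 and 10.0.0.2 answer from inside nvmf_tgt_ns_spdk.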
00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:57.084 Cannot find device "nvmf_init_br" 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:57.084 Cannot find device "nvmf_init_br2" 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:57.084 Cannot find device "nvmf_tgt_br" 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.084 Cannot find device "nvmf_tgt_br2" 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:57.084 Cannot find device "nvmf_init_br" 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:57.084 Cannot find device "nvmf_init_br2" 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:57.084 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:57.084 Cannot find device "nvmf_tgt_br" 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:57.085 Cannot find device "nvmf_tgt_br2" 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:57.085 Cannot find device "nvmf_br" 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:57.085 Cannot find device "nvmf_init_if" 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:57.085 Cannot find device "nvmf_init_if2" 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.085 
19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.085 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:57.343 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.343 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:57.343 00:09:57.343 --- 10.0.0.3 ping statistics --- 00:09:57.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.343 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:57.343 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:57.343 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:57.343 00:09:57.343 --- 10.0.0.4 ping statistics --- 00:09:57.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.343 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:57.343 00:09:57.343 --- 10.0.0.1 ping statistics --- 00:09:57.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.343 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:57.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:57.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:57.343 00:09:57.343 --- 10.0.0.2 ping statistics --- 00:09:57.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.343 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.343 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66723 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66723 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66723 ']' 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.344 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.602 [2024-11-26 19:17:55.795958] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:09:57.602 [2024-11-26 19:17:55.796049] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.602 [2024-11-26 19:17:55.945483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.602 [2024-11-26 19:17:55.998678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.602 [2024-11-26 19:17:55.998745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.602 [2024-11-26 19:17:55.998771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.602 [2024-11-26 19:17:55.998779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.602 [2024-11-26 19:17:55.998786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.602 [2024-11-26 19:17:56.000261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.602 [2024-11-26 19:17:56.000443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:57.602 [2024-11-26 19:17:56.000591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.602 [2024-11-26 19:17:56.000591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:57.860 [2024-11-26 19:17:56.056976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.860 [2024-11-26 19:17:56.168260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.860 Malloc0 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.860 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.861 [2024-11-26 19:17:56.238978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.861 { 00:09:57.861 "params": { 00:09:57.861 "name": "Nvme$subsystem", 00:09:57.861 "trtype": "$TEST_TRANSPORT", 00:09:57.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.861 "adrfam": "ipv4", 00:09:57.861 "trsvcid": "$NVMF_PORT", 00:09:57.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.861 "hdgst": ${hdgst:-false}, 00:09:57.861 "ddgst": ${ddgst:-false} 00:09:57.861 }, 00:09:57.861 "method": "bdev_nvme_attach_controller" 00:09:57.861 } 00:09:57.861 EOF 00:09:57.861 )") 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
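The trace above provisions the bdevio target end to end; redone by hand the sequence is roughly the sketch below (the rpc_cmd helper in the trace issues the same RPCs that scripts/rpc.py exposes; paths, NQNs and sizes as captured):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                              # create the TCP transport (flags exactly as captured)
$RPC bdev_malloc_create 64 512 -b Malloc0                                 # 64 MiB malloc bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0             # expose the malloc bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio then attaches to that listener over NVMe/TCP via the bdev_nvme_attach_controller parameter block that gen_nvmf_target_json prints just below (controller Nvme1, traddr 10.0.0.3, trsvcid 4420, digests disabled) and runs its blockdev test suite against the resulting Nvme1n1 bdev; the outer JSON wrapper assembled by gen_nvmf_target_json is not reproduced in this capture.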
00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:57.861 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.861 "params": { 00:09:57.861 "name": "Nvme1", 00:09:57.861 "trtype": "tcp", 00:09:57.861 "traddr": "10.0.0.3", 00:09:57.861 "adrfam": "ipv4", 00:09:57.861 "trsvcid": "4420", 00:09:57.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.861 "hdgst": false, 00:09:57.861 "ddgst": false 00:09:57.861 }, 00:09:57.861 "method": "bdev_nvme_attach_controller" 00:09:57.861 }' 00:09:58.119 [2024-11-26 19:17:56.302141] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:09:58.119 [2024-11-26 19:17:56.302680] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66752 ] 00:09:58.119 [2024-11-26 19:17:56.453296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:58.119 [2024-11-26 19:17:56.513473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.119 [2024-11-26 19:17:56.513611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.119 [2024-11-26 19:17:56.513772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.377 [2024-11-26 19:17:56.579995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.377 I/O targets: 00:09:58.377 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:58.377 00:09:58.377 00:09:58.377 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.377 http://cunit.sourceforge.net/ 00:09:58.377 00:09:58.377 00:09:58.377 Suite: bdevio tests on: Nvme1n1 00:09:58.377 Test: blockdev write read block ...passed 00:09:58.377 Test: blockdev write zeroes read block ...passed 00:09:58.377 Test: blockdev write zeroes read no split ...passed 00:09:58.377 Test: blockdev write zeroes read split ...passed 00:09:58.377 Test: blockdev write zeroes read split partial ...passed 00:09:58.377 Test: blockdev reset ...[2024-11-26 19:17:56.727081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:58.377 [2024-11-26 19:17:56.727181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a1190 (9): Bad file descriptor 00:09:58.377 passed 00:09:58.377 Test: blockdev write read 8 blocks ...[2024-11-26 19:17:56.745299] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:58.377 passed 00:09:58.377 Test: blockdev write read size > 128k ...passed 00:09:58.377 Test: blockdev write read invalid size ...passed 00:09:58.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.377 Test: blockdev write read max offset ...passed 00:09:58.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.377 Test: blockdev writev readv 8 blocks ...passed 00:09:58.377 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.377 Test: blockdev writev readv block ...passed 00:09:58.377 Test: blockdev writev readv size > 128k ...passed 00:09:58.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.377 Test: blockdev comparev and writev ...[2024-11-26 19:17:56.753109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.753168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.753193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.753207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:58.377 passed 00:09:58.377 Test: blockdev nvme passthru rw ...[2024-11-26 19:17:56.753587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.753614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.753635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.753647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.753943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.753964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.753984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.753997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.754317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.754337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.754357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.377 [2024-11-26 19:17:56.754369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:58.377 passed 00:09:58.377 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:17:56.755211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.377 [2024-11-26 19:17:56.755243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.755361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.377 [2024-11-26 19:17:56.755380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.755493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.377 [2024-11-26 19:17:56.755510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:58.377 [2024-11-26 19:17:56.755620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.377 [2024-11-26 19:17:56.755638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:58.377 passed 00:09:58.377 Test: blockdev nvme admin passthru ...passed 00:09:58.377 Test: blockdev copy ...passed 00:09:58.377 00:09:58.377 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.377 suites 1 1 n/a 0 0 00:09:58.377 tests 23 23 23 0 0 00:09:58.377 asserts 152 152 152 0 n/a 00:09:58.377 00:09:58.377 Elapsed time = 0.143 seconds 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.636 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.636 rmmod nvme_tcp 00:09:58.636 rmmod nvme_fabrics 00:09:58.636 rmmod nvme_keyring 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66723 ']' 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66723 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66723 ']' 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66723 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.636 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66723 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:58.894 killing process with pid 66723 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66723' 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66723 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66723 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:58.894 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:59.152 19:17:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:59.152 00:09:59.152 real 0m2.415s 00:09:59.152 user 0m6.457s 00:09:59.152 sys 0m0.873s 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.152 ************************************ 00:09:59.152 END TEST nvmf_bdevio 00:09:59.152 ************************************ 00:09:59.152 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:59.410 ************************************ 00:09:59.410 END TEST nvmf_target_core 00:09:59.410 ************************************ 00:09:59.410 00:09:59.410 real 2m34.774s 00:09:59.410 user 6m42.659s 00:09:59.410 sys 0m54.612s 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.410 19:17:57 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.410 19:17:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.410 19:17:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.410 19:17:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.410 ************************************ 00:09:59.410 START TEST nvmf_target_extra 00:09:59.410 ************************************ 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.410 * Looking for test storage... 
00:09:59.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.410 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.411 --rc genhtml_branch_coverage=1 00:09:59.411 --rc genhtml_function_coverage=1 00:09:59.411 --rc genhtml_legend=1 00:09:59.411 --rc geninfo_all_blocks=1 00:09:59.411 --rc geninfo_unexecuted_blocks=1 00:09:59.411 00:09:59.411 ' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.411 --rc genhtml_branch_coverage=1 00:09:59.411 --rc genhtml_function_coverage=1 00:09:59.411 --rc genhtml_legend=1 00:09:59.411 --rc geninfo_all_blocks=1 00:09:59.411 --rc geninfo_unexecuted_blocks=1 00:09:59.411 00:09:59.411 ' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.411 --rc genhtml_branch_coverage=1 00:09:59.411 --rc genhtml_function_coverage=1 00:09:59.411 --rc genhtml_legend=1 00:09:59.411 --rc geninfo_all_blocks=1 00:09:59.411 --rc geninfo_unexecuted_blocks=1 00:09:59.411 00:09:59.411 ' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.411 --rc genhtml_branch_coverage=1 00:09:59.411 --rc genhtml_function_coverage=1 00:09:59.411 --rc genhtml_legend=1 00:09:59.411 --rc geninfo_all_blocks=1 00:09:59.411 --rc geninfo_unexecuted_blocks=1 00:09:59.411 00:09:59.411 ' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.411 19:17:57 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.411 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.411 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.671 ************************************ 00:09:59.671 START TEST nvmf_auth_target 00:09:59.671 ************************************ 00:09:59.671 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:59.671 * Looking for test storage... 
00:09:59.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.671 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.671 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.671 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.671 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.672 
19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:59.672 Cannot find device "nvmf_init_br" 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:59.672 Cannot find device "nvmf_init_br2" 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:59.672 Cannot find device "nvmf_tgt_br" 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:59.672 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.930 Cannot find device "nvmf_tgt_br2" 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:59.930 Cannot find device "nvmf_init_br" 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:59.930 Cannot find device "nvmf_init_br2" 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:59.930 Cannot find device "nvmf_tgt_br" 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:59.930 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:59.930 Cannot find device "nvmf_tgt_br2" 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:59.931 Cannot find device "nvmf_br" 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:59.931 Cannot find device "nvmf_init_if" 00:09:59.931 19:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:59.931 Cannot find device "nvmf_init_if2" 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.931 19:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:59.931 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:00.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:10:00.189 00:10:00.189 --- 10.0.0.3 ping statistics --- 00:10:00.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.189 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:00.189 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:00.189 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:10:00.189 00:10:00.189 --- 10.0.0.4 ping statistics --- 00:10:00.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.189 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:00.189 00:10:00.189 --- 10.0.0.1 ping statistics --- 00:10:00.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.189 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:00.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:10:00.189 00:10:00.189 --- 10.0.0.2 ping statistics --- 00:10:00.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.189 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67036 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67036 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67036 ']' 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
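(For reference, the veth/netns topology that the traced nvmf_veth_init steps above build can be reproduced standalone with roughly the commands below. This is a simplified sketch distilled from the logged ip/iptables calls — it shows one initiator/target veth pair instead of the two pairs the script creates, and reuses the device, namespace and bridge names from the log; it is not part of the test output itself.)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator interface + its bridge-facing peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target interface, moved into the namespace below
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the peer ends so host and namespace can talk
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic to the listener port
  ping -c 1 10.0.0.3                                           # initiator -> target connectivity check, as in the log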
00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.189 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.122 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.122 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:01.122 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.122 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.122 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67068 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5320c27ff9a0bf9156fa63bf1cdf22b84db775fe87b6602a 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vAE 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5320c27ff9a0bf9156fa63bf1cdf22b84db775fe87b6602a 0 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5320c27ff9a0bf9156fa63bf1cdf22b84db775fe87b6602a 0 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5320c27ff9a0bf9156fa63bf1cdf22b84db775fe87b6602a 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.380 19:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vAE 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vAE 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vAE 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1ef88595ff3fa503bbf2e0b678af97c929c8fcb2d1ef1015d646f38918c1aef2 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KY0 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1ef88595ff3fa503bbf2e0b678af97c929c8fcb2d1ef1015d646f38918c1aef2 3 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1ef88595ff3fa503bbf2e0b678af97c929c8fcb2d1ef1015d646f38918c1aef2 3 00:10:01.380 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1ef88595ff3fa503bbf2e0b678af97c929c8fcb2d1ef1015d646f38918c1aef2 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KY0 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KY0 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.KY0 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:01.381 19:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8f9f7ea2e493fed9ab0905d006f541ab 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p39 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8f9f7ea2e493fed9ab0905d006f541ab 1 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8f9f7ea2e493fed9ab0905d006f541ab 1 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8f9f7ea2e493fed9ab0905d006f541ab 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p39 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p39 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.p39 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51093f3ced566fccf9341ed35d0cbe8f8b1beb65c76e3d32 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Yad 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51093f3ced566fccf9341ed35d0cbe8f8b1beb65c76e3d32 2 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51093f3ced566fccf9341ed35d0cbe8f8b1beb65c76e3d32 2 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51093f3ced566fccf9341ed35d0cbe8f8b1beb65c76e3d32 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:01.381 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Yad 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Yad 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Yad 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b46f1aebe3bfa10c2d1f1e8d3d639102a58b081d74c21a6e 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gfT 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b46f1aebe3bfa10c2d1f1e8d3d639102a58b081d74c21a6e 2 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b46f1aebe3bfa10c2d1f1e8d3d639102a58b081d74c21a6e 2 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b46f1aebe3bfa10c2d1f1e8d3d639102a58b081d74c21a6e 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gfT 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gfT 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.gfT 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.640 19:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0378ab69136fbc9b8ee1a3788d029ca5 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7OW 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0378ab69136fbc9b8ee1a3788d029ca5 1 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0378ab69136fbc9b8ee1a3788d029ca5 1 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0378ab69136fbc9b8ee1a3788d029ca5 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7OW 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7OW 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.7OW 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe3c1268361c8203cbc8c727719d24b7be01044a3232c8b17d26fa3aa4bde18d 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Rdc 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
fe3c1268361c8203cbc8c727719d24b7be01044a3232c8b17d26fa3aa4bde18d 3 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fe3c1268361c8203cbc8c727719d24b7be01044a3232c8b17d26fa3aa4bde18d 3 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe3c1268361c8203cbc8c727719d24b7be01044a3232c8b17d26fa3aa4bde18d 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:01.640 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Rdc 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Rdc 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Rdc 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67036 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67036 ']' 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.640 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67068 /var/tmp/host.sock 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67068 ']' 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
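
The trace above builds the DHCHAP key material for the test: keys[0..3] plus controller keys ckeys[0..2], each produced as len/2 random bytes read from /dev/urandom (printed as a hex string by `xxd -p -c0`) and then wrapped into a DHHC-1 secret by an inline `python -` step whose body is not echoed in the log. Below is a minimal sketch of `gen_dhchap_key` as it appears in this run; the Python body is an assumption — it base64-encodes the hex string with a little-endian CRC32 appended, which is the layout the printed `DHHC-1:xx:...:` secrets are consistent with.

```bash
# Sketch of the gen_dhchap_key flow traced above. Only the xxd/mktemp/chmod
# steps are visible in the log; the CRC32 + base64 wrapping is an assumption
# based on the DHHC-1 secret format the generated keys match.
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import sys, base64, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")              # assumed: CRC32 appended before encoding
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PY
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key sha256 32    # prints a path such as /tmp/spdk.key-sha256.p39
```

Digest ids 0-3 map to null/sha256/sha384/sha512, matching the `digests` array echoed in the trace; keys[3] is created without a companion controller key (ckeys[3] stays empty).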
00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vAE 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vAE 00:10:02.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vAE 00:10:02.721 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.KY0 ]] 00:10:02.721 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KY0 00:10:02.721 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.721 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.721 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.721 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KY0 00:10:02.721 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KY0 00:10:02.978 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:02.978 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.p39 00:10:02.978 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.978 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.978 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.978 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.p39 00:10:02.978 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.p39 00:10:03.236 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Yad ]] 00:10:03.236 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yad 00:10:03.236 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.236 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.236 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.236 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yad 00:10:03.236 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yad 00:10:03.495 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:03.495 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gfT 00:10:03.495 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.495 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.495 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.495 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gfT 00:10:03.495 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gfT 00:10:03.753 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.7OW ]] 00:10:03.753 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7OW 00:10:03.753 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.753 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.753 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.753 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7OW 00:10:03.753 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7OW 00:10:04.011 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:04.011 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rdc 00:10:04.011 19:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.011 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.011 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.011 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Rdc 00:10:04.011 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Rdc 00:10:04.269 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:04.269 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:04.269 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:04.269 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.269 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:04.269 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.527 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.786 00:10:04.786 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.786 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.786 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.045 { 00:10:05.045 "cntlid": 1, 00:10:05.045 "qid": 0, 00:10:05.045 "state": "enabled", 00:10:05.045 "thread": "nvmf_tgt_poll_group_000", 00:10:05.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:05.045 "listen_address": { 00:10:05.045 "trtype": "TCP", 00:10:05.045 "adrfam": "IPv4", 00:10:05.045 "traddr": "10.0.0.3", 00:10:05.045 "trsvcid": "4420" 00:10:05.045 }, 00:10:05.045 "peer_address": { 00:10:05.045 "trtype": "TCP", 00:10:05.045 "adrfam": "IPv4", 00:10:05.045 "traddr": "10.0.0.1", 00:10:05.045 "trsvcid": "59944" 00:10:05.045 }, 00:10:05.045 "auth": { 00:10:05.045 "state": "completed", 00:10:05.045 "digest": "sha256", 00:10:05.045 "dhgroup": "null" 00:10:05.045 } 00:10:05.045 } 00:10:05.045 ]' 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.045 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.303 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:05.303 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.303 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.304 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.304 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.562 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:05.562 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.747 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.747 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.747 19:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.006 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.264 { 00:10:10.264 "cntlid": 3, 00:10:10.264 "qid": 0, 00:10:10.264 "state": "enabled", 00:10:10.264 "thread": "nvmf_tgt_poll_group_000", 00:10:10.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:10.264 "listen_address": { 00:10:10.264 "trtype": "TCP", 00:10:10.264 "adrfam": "IPv4", 00:10:10.264 "traddr": "10.0.0.3", 00:10:10.264 "trsvcid": "4420" 00:10:10.264 }, 00:10:10.264 "peer_address": { 00:10:10.264 "trtype": "TCP", 00:10:10.264 "adrfam": "IPv4", 00:10:10.264 "traddr": "10.0.0.1", 00:10:10.264 "trsvcid": "52552" 00:10:10.264 }, 00:10:10.264 "auth": { 00:10:10.264 "state": "completed", 00:10:10.264 "digest": "sha256", 00:10:10.264 "dhgroup": "null" 00:10:10.264 } 00:10:10.264 } 00:10:10.264 ]' 00:10:10.264 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.522 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.522 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.522 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:10.522 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.522 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.522 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.522 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.780 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret 
DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:10.780 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:11.344 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.603 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.860 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.860 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.860 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.860 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.118 00:10:12.118 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.118 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.118 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.376 { 00:10:12.376 "cntlid": 5, 00:10:12.376 "qid": 0, 00:10:12.376 "state": "enabled", 00:10:12.376 "thread": "nvmf_tgt_poll_group_000", 00:10:12.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:12.376 "listen_address": { 00:10:12.376 "trtype": "TCP", 00:10:12.376 "adrfam": "IPv4", 00:10:12.376 "traddr": "10.0.0.3", 00:10:12.376 "trsvcid": "4420" 00:10:12.376 }, 00:10:12.376 "peer_address": { 00:10:12.376 "trtype": "TCP", 00:10:12.376 "adrfam": "IPv4", 00:10:12.376 "traddr": "10.0.0.1", 00:10:12.376 "trsvcid": "52580" 00:10:12.376 }, 00:10:12.376 "auth": { 00:10:12.376 "state": "completed", 00:10:12.376 "digest": "sha256", 00:10:12.376 "dhgroup": "null" 00:10:12.376 } 00:10:12.376 } 00:10:12.376 ]' 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.376 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.634 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:12.634 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:13.288 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:13.855 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:14.112 00:10:14.112 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:14.112 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.112 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.369 { 00:10:14.369 "cntlid": 7, 00:10:14.369 "qid": 0, 00:10:14.369 "state": "enabled", 00:10:14.369 "thread": "nvmf_tgt_poll_group_000", 00:10:14.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:14.369 "listen_address": { 00:10:14.369 "trtype": "TCP", 00:10:14.369 "adrfam": "IPv4", 00:10:14.369 "traddr": "10.0.0.3", 00:10:14.369 "trsvcid": "4420" 00:10:14.369 }, 00:10:14.369 "peer_address": { 00:10:14.369 "trtype": "TCP", 00:10:14.369 "adrfam": "IPv4", 00:10:14.369 "traddr": "10.0.0.1", 00:10:14.369 "trsvcid": "52606" 00:10:14.369 }, 00:10:14.369 "auth": { 00:10:14.369 "state": "completed", 00:10:14.369 "digest": "sha256", 00:10:14.369 "dhgroup": "null" 00:10:14.369 } 00:10:14.369 } 00:10:14.369 ]' 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.369 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.626 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:14.626 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:15.611 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.611 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.870 00:10:16.128 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.128 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.128 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.385 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.385 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.385 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.385 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.385 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.385 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.385 { 00:10:16.385 "cntlid": 9, 00:10:16.385 "qid": 0, 00:10:16.385 "state": "enabled", 00:10:16.385 "thread": "nvmf_tgt_poll_group_000", 00:10:16.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:16.385 "listen_address": { 00:10:16.385 "trtype": "TCP", 00:10:16.385 "adrfam": "IPv4", 00:10:16.385 "traddr": "10.0.0.3", 00:10:16.385 "trsvcid": "4420" 00:10:16.385 }, 00:10:16.385 "peer_address": { 00:10:16.385 "trtype": "TCP", 00:10:16.385 "adrfam": "IPv4", 00:10:16.385 "traddr": "10.0.0.1", 00:10:16.385 "trsvcid": "52620" 00:10:16.385 }, 00:10:16.385 "auth": { 00:10:16.385 "state": "completed", 00:10:16.385 "digest": "sha256", 00:10:16.385 "dhgroup": "ffdhe2048" 00:10:16.385 } 00:10:16.385 } 00:10:16.385 ]' 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.386 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.644 
19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:16.644 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.580 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.839 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.115 00:10:18.115 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.115 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.115 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.373 { 00:10:18.373 "cntlid": 11, 00:10:18.373 "qid": 0, 00:10:18.373 "state": "enabled", 00:10:18.373 "thread": "nvmf_tgt_poll_group_000", 00:10:18.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:18.373 "listen_address": { 00:10:18.373 "trtype": "TCP", 00:10:18.373 "adrfam": "IPv4", 00:10:18.373 "traddr": "10.0.0.3", 00:10:18.373 "trsvcid": "4420" 00:10:18.373 }, 00:10:18.373 "peer_address": { 00:10:18.373 "trtype": "TCP", 00:10:18.373 "adrfam": "IPv4", 00:10:18.373 "traddr": "10.0.0.1", 00:10:18.373 "trsvcid": "52660" 00:10:18.373 }, 00:10:18.373 "auth": { 00:10:18.373 "state": "completed", 00:10:18.373 "digest": "sha256", 00:10:18.373 "dhgroup": "ffdhe2048" 00:10:18.373 } 00:10:18.373 } 00:10:18.373 ]' 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.373 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.373 
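Interleaved with those RPC cycles, the same credentials are exercised through the kernel initiator: nvme-cli connects to the subsystem with the generated DHHC-1 secrets (the nvme_connect / nvme connect lines in this log), disconnects, and the host entry is then removed on the target before the next key is configured. A standalone sketch of that path, with the long secret strings elided rather than reproduced:

# Kernel-initiator pass, as seen in the nvme connect lines of this log;
# the DHHC-1 strings are the run's generated secrets and are elided here.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 \
    --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# afterwards the trace removes the host from the subsystem on the target side:
#   rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>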
19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.940 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:18.940 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:19.507 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.767 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.026 00:10:20.285 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.285 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.285 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.544 { 00:10:20.544 "cntlid": 13, 00:10:20.544 "qid": 0, 00:10:20.544 "state": "enabled", 00:10:20.544 "thread": "nvmf_tgt_poll_group_000", 00:10:20.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:20.544 "listen_address": { 00:10:20.544 "trtype": "TCP", 00:10:20.544 "adrfam": "IPv4", 00:10:20.544 "traddr": "10.0.0.3", 00:10:20.544 "trsvcid": "4420" 00:10:20.544 }, 00:10:20.544 "peer_address": { 00:10:20.544 "trtype": "TCP", 00:10:20.544 "adrfam": "IPv4", 00:10:20.544 "traddr": "10.0.0.1", 00:10:20.544 "trsvcid": "39890" 00:10:20.544 }, 00:10:20.544 "auth": { 00:10:20.544 "state": "completed", 00:10:20.544 "digest": "sha256", 00:10:20.544 "dhgroup": "ffdhe2048" 00:10:20.544 } 00:10:20.544 } 00:10:20.544 ]' 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.544 19:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.544 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.802 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:20.802 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:21.736 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:21.736 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:22.301 00:10:22.301 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.301 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.301 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.589 { 00:10:22.589 "cntlid": 15, 00:10:22.589 "qid": 0, 00:10:22.589 "state": "enabled", 00:10:22.589 "thread": "nvmf_tgt_poll_group_000", 00:10:22.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:22.589 "listen_address": { 00:10:22.589 "trtype": "TCP", 00:10:22.589 "adrfam": "IPv4", 00:10:22.589 "traddr": "10.0.0.3", 00:10:22.589 "trsvcid": "4420" 00:10:22.589 }, 00:10:22.589 "peer_address": { 00:10:22.589 "trtype": "TCP", 00:10:22.589 "adrfam": "IPv4", 00:10:22.589 "traddr": "10.0.0.1", 00:10:22.589 "trsvcid": "39916" 00:10:22.589 }, 00:10:22.589 "auth": { 00:10:22.589 "state": "completed", 00:10:22.589 "digest": "sha256", 00:10:22.589 "dhgroup": "ffdhe2048" 00:10:22.589 } 00:10:22.589 } 00:10:22.589 ]' 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.589 
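Each of these blocks is one iteration of the nested loops in target/auth.sh that the trace markers point at (@119 over DH groups, @120 over key indices, @121 reconfiguring the host bdev module, @123 calling connect_authenticate). A rough reconstruction of that control flow is sketched below; it leans on helpers defined earlier in auth.sh (hostrpc, rpc_cmd, nvme_connect, connect_authenticate) and on the keys/ckeys arrays, so the exact argument plumbing is an assumption rather than a verbatim excerpt of the script, and only sha256 appears as the digest in this stretch of the log.

# Reconstructed loop driving this part of the log; helper functions and the
# keys/ckeys arrays come from auth.sh and are assumed, not shown here.
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do                  # auth.sh@119
    for keyid in "${!keys[@]}"; do                                # auth.sh@120
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"  # auth.sh@121
        # add_host + attach + qpair checks + detach happen inside this call
        connect_authenticate sha256 "$dhgroup" "$keyid"           # auth.sh@123
        # kernel path: connect with the matching DHHC-1 secrets, then disconnect
        nvme_connect --dhchap-secret "${keys[$keyid]}" \
            ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}   # auth.sh@80
        nvme disconnect -n nqn.2024-03.io.spdk:cnode0             # auth.sh@82
        rpc_cmd nvmf_subsystem_remove_host \
            nqn.2024-03.io.spdk:cnode0 "$hostnqn"                 # auth.sh@83
    done
done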
19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.589 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.847 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:22.847 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:23.413 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.413 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:23.413 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.413 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.413 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.414 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:23.414 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.414 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:23.414 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.980 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.288 00:10:24.288 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.288 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.288 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.576 { 00:10:24.576 "cntlid": 17, 00:10:24.576 "qid": 0, 00:10:24.576 "state": "enabled", 00:10:24.576 "thread": "nvmf_tgt_poll_group_000", 00:10:24.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:24.576 "listen_address": { 00:10:24.576 "trtype": "TCP", 00:10:24.576 "adrfam": "IPv4", 00:10:24.576 "traddr": "10.0.0.3", 00:10:24.576 "trsvcid": "4420" 00:10:24.576 }, 00:10:24.576 "peer_address": { 00:10:24.576 "trtype": "TCP", 00:10:24.576 "adrfam": "IPv4", 00:10:24.576 "traddr": "10.0.0.1", 00:10:24.576 "trsvcid": "39942" 00:10:24.576 }, 00:10:24.576 "auth": { 00:10:24.576 "state": "completed", 00:10:24.576 "digest": "sha256", 00:10:24.576 "dhgroup": "ffdhe3072" 00:10:24.576 } 00:10:24.576 } 00:10:24.576 ]' 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.576 19:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.576 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.835 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:24.835 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:25.771 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.771 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.340 00:10:26.340 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.340 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.340 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.599 { 00:10:26.599 "cntlid": 19, 00:10:26.599 "qid": 0, 00:10:26.599 "state": "enabled", 00:10:26.599 "thread": "nvmf_tgt_poll_group_000", 00:10:26.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:26.599 "listen_address": { 00:10:26.599 "trtype": "TCP", 00:10:26.599 "adrfam": "IPv4", 00:10:26.599 "traddr": "10.0.0.3", 00:10:26.599 "trsvcid": "4420" 00:10:26.599 }, 00:10:26.599 "peer_address": { 00:10:26.599 "trtype": "TCP", 00:10:26.599 "adrfam": "IPv4", 00:10:26.599 "traddr": "10.0.0.1", 00:10:26.599 "trsvcid": "39966" 00:10:26.599 }, 00:10:26.599 "auth": { 00:10:26.599 "state": "completed", 00:10:26.599 "digest": "sha256", 00:10:26.599 "dhgroup": "ffdhe3072" 00:10:26.599 } 00:10:26.599 } 00:10:26.599 ]' 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:26.599 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.599 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.599 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.599 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.858 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:26.858 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:27.794 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.053 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.312 00:10:28.312 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.312 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.312 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.571 { 00:10:28.571 "cntlid": 21, 00:10:28.571 "qid": 0, 00:10:28.571 "state": "enabled", 00:10:28.571 "thread": "nvmf_tgt_poll_group_000", 00:10:28.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:28.571 "listen_address": { 00:10:28.571 "trtype": "TCP", 00:10:28.571 "adrfam": "IPv4", 00:10:28.571 "traddr": "10.0.0.3", 00:10:28.571 "trsvcid": "4420" 00:10:28.571 }, 00:10:28.571 "peer_address": { 00:10:28.571 "trtype": "TCP", 00:10:28.571 "adrfam": "IPv4", 00:10:28.571 "traddr": "10.0.0.1", 00:10:28.571 "trsvcid": "39994" 00:10:28.571 }, 00:10:28.571 "auth": { 00:10:28.571 "state": "completed", 00:10:28.571 "digest": "sha256", 00:10:28.571 "dhgroup": "ffdhe3072" 00:10:28.571 } 00:10:28.571 } 00:10:28.571 ]' 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.571 19:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:28.571 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.830 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.830 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.830 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.831 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:28.831 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.766 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.025 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.284 00:10:30.284 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.284 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.284 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.543 { 00:10:30.543 "cntlid": 23, 00:10:30.543 "qid": 0, 00:10:30.543 "state": "enabled", 00:10:30.543 "thread": "nvmf_tgt_poll_group_000", 00:10:30.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:30.543 "listen_address": { 00:10:30.543 "trtype": "TCP", 00:10:30.543 "adrfam": "IPv4", 00:10:30.543 "traddr": "10.0.0.3", 00:10:30.543 "trsvcid": "4420" 00:10:30.543 }, 00:10:30.543 "peer_address": { 00:10:30.543 "trtype": "TCP", 00:10:30.543 "adrfam": "IPv4", 00:10:30.543 "traddr": "10.0.0.1", 00:10:30.543 "trsvcid": "47988" 00:10:30.543 }, 00:10:30.543 "auth": { 00:10:30.543 "state": "completed", 00:10:30.543 "digest": "sha256", 00:10:30.543 "dhgroup": "ffdhe3072" 00:10:30.543 } 00:10:30.543 } 00:10:30.543 ]' 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:30.543 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.802 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:30.802 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.802 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.802 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.802 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.060 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:31.060 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:31.626 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.885 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.452 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.452 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.452 { 00:10:32.452 "cntlid": 25, 00:10:32.452 "qid": 0, 00:10:32.452 "state": "enabled", 00:10:32.452 "thread": "nvmf_tgt_poll_group_000", 00:10:32.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:32.452 "listen_address": { 00:10:32.452 "trtype": "TCP", 00:10:32.452 "adrfam": "IPv4", 00:10:32.452 "traddr": "10.0.0.3", 00:10:32.452 "trsvcid": "4420" 00:10:32.452 }, 00:10:32.452 "peer_address": { 00:10:32.452 "trtype": "TCP", 00:10:32.452 "adrfam": "IPv4", 00:10:32.452 "traddr": "10.0.0.1", 00:10:32.452 "trsvcid": "48030" 00:10:32.452 }, 00:10:32.452 "auth": { 00:10:32.452 "state": "completed", 00:10:32.452 "digest": "sha256", 00:10:32.452 "dhgroup": "ffdhe4096" 00:10:32.452 } 00:10:32.452 } 00:10:32.452 ]' 00:10:32.711 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:32.711 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.711 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.711 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:32.711 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.711 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.711 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.711 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.970 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:32.970 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.539 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.797 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.363 00:10:34.363 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.364 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.364 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.622 { 00:10:34.622 "cntlid": 27, 00:10:34.622 "qid": 0, 00:10:34.622 "state": "enabled", 00:10:34.622 "thread": "nvmf_tgt_poll_group_000", 00:10:34.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:34.622 "listen_address": { 00:10:34.622 "trtype": "TCP", 00:10:34.622 "adrfam": "IPv4", 00:10:34.622 "traddr": "10.0.0.3", 00:10:34.622 "trsvcid": "4420" 00:10:34.622 }, 00:10:34.622 "peer_address": { 00:10:34.622 "trtype": "TCP", 00:10:34.622 "adrfam": "IPv4", 00:10:34.622 "traddr": "10.0.0.1", 00:10:34.622 "trsvcid": "48076" 00:10:34.622 }, 00:10:34.622 "auth": { 00:10:34.622 "state": "completed", 
00:10:34.622 "digest": "sha256", 00:10:34.622 "dhgroup": "ffdhe4096" 00:10:34.622 } 00:10:34.622 } 00:10:34.622 ]' 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.622 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.622 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.622 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.881 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.881 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.881 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.139 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:35.139 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:35.706 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.706 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:35.706 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.706 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.706 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.706 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.706 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.706 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.965 19:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.965 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.223 00:10:36.223 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.223 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.223 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.791 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.791 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.791 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.791 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.791 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.791 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.791 { 00:10:36.791 "cntlid": 29, 00:10:36.791 "qid": 0, 00:10:36.791 "state": "enabled", 00:10:36.791 "thread": "nvmf_tgt_poll_group_000", 00:10:36.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:36.791 "listen_address": { 00:10:36.791 "trtype": "TCP", 00:10:36.791 "adrfam": "IPv4", 00:10:36.791 "traddr": "10.0.0.3", 00:10:36.791 "trsvcid": "4420" 00:10:36.791 }, 00:10:36.791 "peer_address": { 00:10:36.791 "trtype": "TCP", 00:10:36.791 "adrfam": 
"IPv4", 00:10:36.791 "traddr": "10.0.0.1", 00:10:36.791 "trsvcid": "48114" 00:10:36.791 }, 00:10:36.791 "auth": { 00:10:36.791 "state": "completed", 00:10:36.791 "digest": "sha256", 00:10:36.791 "dhgroup": "ffdhe4096" 00:10:36.791 } 00:10:36.791 } 00:10:36.791 ]' 00:10:36.791 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.791 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.791 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.791 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:36.791 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.791 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.791 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.791 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.050 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:37.050 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.616 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:37.875 19:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.875 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:37.876 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:37.876 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:38.135 00:10:38.135 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.135 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.135 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.394 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.394 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.394 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.394 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.394 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.394 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.394 { 00:10:38.394 "cntlid": 31, 00:10:38.394 "qid": 0, 00:10:38.394 "state": "enabled", 00:10:38.394 "thread": "nvmf_tgt_poll_group_000", 00:10:38.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:38.394 "listen_address": { 00:10:38.394 "trtype": "TCP", 00:10:38.394 "adrfam": "IPv4", 00:10:38.394 "traddr": "10.0.0.3", 00:10:38.394 "trsvcid": "4420" 00:10:38.394 }, 00:10:38.394 "peer_address": { 00:10:38.394 "trtype": "TCP", 
00:10:38.394 "adrfam": "IPv4", 00:10:38.394 "traddr": "10.0.0.1", 00:10:38.394 "trsvcid": "48154" 00:10:38.394 }, 00:10:38.394 "auth": { 00:10:38.394 "state": "completed", 00:10:38.394 "digest": "sha256", 00:10:38.394 "dhgroup": "ffdhe4096" 00:10:38.394 } 00:10:38.394 } 00:10:38.394 ]' 00:10:38.394 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.653 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.653 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.653 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:38.653 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.653 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.653 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.653 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.912 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:38.912 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:39.497 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:39.787 
19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.787 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.361 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.361 { 00:10:40.361 "cntlid": 33, 00:10:40.361 "qid": 0, 00:10:40.361 "state": "enabled", 00:10:40.361 "thread": "nvmf_tgt_poll_group_000", 00:10:40.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:40.361 "listen_address": { 00:10:40.361 "trtype": "TCP", 00:10:40.361 "adrfam": "IPv4", 00:10:40.361 "traddr": 
"10.0.0.3", 00:10:40.361 "trsvcid": "4420" 00:10:40.361 }, 00:10:40.361 "peer_address": { 00:10:40.361 "trtype": "TCP", 00:10:40.361 "adrfam": "IPv4", 00:10:40.361 "traddr": "10.0.0.1", 00:10:40.361 "trsvcid": "46604" 00:10:40.361 }, 00:10:40.361 "auth": { 00:10:40.361 "state": "completed", 00:10:40.361 "digest": "sha256", 00:10:40.361 "dhgroup": "ffdhe6144" 00:10:40.361 } 00:10:40.361 } 00:10:40.361 ]' 00:10:40.361 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.620 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.620 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.620 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:40.620 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.620 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.620 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.620 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.879 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:40.879 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:41.445 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.445 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:41.445 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.445 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.703 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.703 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.703 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:41.703 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.961 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.219 00:10:42.219 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.219 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.219 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.477 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.477 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.477 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.477 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.737 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.737 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.737 { 00:10:42.737 "cntlid": 35, 00:10:42.737 "qid": 0, 00:10:42.737 "state": "enabled", 00:10:42.737 "thread": "nvmf_tgt_poll_group_000", 
00:10:42.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:42.737 "listen_address": { 00:10:42.737 "trtype": "TCP", 00:10:42.737 "adrfam": "IPv4", 00:10:42.737 "traddr": "10.0.0.3", 00:10:42.737 "trsvcid": "4420" 00:10:42.737 }, 00:10:42.737 "peer_address": { 00:10:42.737 "trtype": "TCP", 00:10:42.737 "adrfam": "IPv4", 00:10:42.737 "traddr": "10.0.0.1", 00:10:42.737 "trsvcid": "46646" 00:10:42.737 }, 00:10:42.737 "auth": { 00:10:42.737 "state": "completed", 00:10:42.737 "digest": "sha256", 00:10:42.737 "dhgroup": "ffdhe6144" 00:10:42.737 } 00:10:42.737 } 00:10:42.737 ]' 00:10:42.737 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.737 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.737 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.737 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:42.737 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.737 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.737 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.737 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.995 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:42.995 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:43.562 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.562 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:43.562 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.562 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.562 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.562 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.562 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:43.562 19:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:44.129 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:44.129 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.129 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:44.129 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:44.129 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.130 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.388 00:10:44.388 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.388 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.388 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.647 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.647 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.647 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.647 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.647 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.647 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.647 { 
00:10:44.647 "cntlid": 37, 00:10:44.647 "qid": 0, 00:10:44.647 "state": "enabled", 00:10:44.647 "thread": "nvmf_tgt_poll_group_000", 00:10:44.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:44.647 "listen_address": { 00:10:44.647 "trtype": "TCP", 00:10:44.647 "adrfam": "IPv4", 00:10:44.647 "traddr": "10.0.0.3", 00:10:44.647 "trsvcid": "4420" 00:10:44.647 }, 00:10:44.647 "peer_address": { 00:10:44.647 "trtype": "TCP", 00:10:44.647 "adrfam": "IPv4", 00:10:44.647 "traddr": "10.0.0.1", 00:10:44.647 "trsvcid": "46664" 00:10:44.647 }, 00:10:44.647 "auth": { 00:10:44.647 "state": "completed", 00:10:44.647 "digest": "sha256", 00:10:44.647 "dhgroup": "ffdhe6144" 00:10:44.647 } 00:10:44.647 } 00:10:44.647 ]' 00:10:44.647 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.647 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.647 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.906 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:44.906 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.906 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.906 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.906 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.165 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:45.166 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.733 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:45.992 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:46.560 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:46.560 { 00:10:46.560 "cntlid": 39, 00:10:46.560 "qid": 0, 00:10:46.560 "state": "enabled", 00:10:46.560 "thread": "nvmf_tgt_poll_group_000", 00:10:46.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:46.560 "listen_address": { 00:10:46.560 "trtype": "TCP", 00:10:46.560 "adrfam": "IPv4", 00:10:46.560 "traddr": "10.0.0.3", 00:10:46.560 "trsvcid": "4420" 00:10:46.560 }, 00:10:46.560 "peer_address": { 00:10:46.560 "trtype": "TCP", 00:10:46.560 "adrfam": "IPv4", 00:10:46.560 "traddr": "10.0.0.1", 00:10:46.560 "trsvcid": "46686" 00:10:46.560 }, 00:10:46.560 "auth": { 00:10:46.560 "state": "completed", 00:10:46.560 "digest": "sha256", 00:10:46.560 "dhgroup": "ffdhe6144" 00:10:46.560 } 00:10:46.560 } 00:10:46.560 ]' 00:10:46.560 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.818 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.818 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.818 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:46.818 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.818 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.818 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.818 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.077 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:47.077 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:48.016 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.016 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:48.017 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.017 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.017 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.017 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:48.017 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.017 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.017 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.285 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.852 00:10:48.852 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.852 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.852 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.110 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.111 { 00:10:49.111 "cntlid": 41, 00:10:49.111 "qid": 0, 00:10:49.111 "state": "enabled", 00:10:49.111 "thread": "nvmf_tgt_poll_group_000", 00:10:49.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:49.111 "listen_address": { 00:10:49.111 "trtype": "TCP", 00:10:49.111 "adrfam": "IPv4", 00:10:49.111 "traddr": "10.0.0.3", 00:10:49.111 "trsvcid": "4420" 00:10:49.111 }, 00:10:49.111 "peer_address": { 00:10:49.111 "trtype": "TCP", 00:10:49.111 "adrfam": "IPv4", 00:10:49.111 "traddr": "10.0.0.1", 00:10:49.111 "trsvcid": "46712" 00:10:49.111 }, 00:10:49.111 "auth": { 00:10:49.111 "state": "completed", 00:10:49.111 "digest": "sha256", 00:10:49.111 "dhgroup": "ffdhe8192" 00:10:49.111 } 00:10:49.111 } 00:10:49.111 ]' 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.111 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.677 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:49.677 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
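The passes traced above all follow the same connect_authenticate cycle from target/auth.sh, repeated for each key index and DH group: the host-side bdev layer is restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with the key under test, a controller is attached and the negotiated auth parameters are read back from the target's qpair listing, and the same credentials are then exercised through the kernel initiator with nvme connect/disconnect before the host entry is removed again. A condensed sketch of one such pass, assembled only from the commands visible in this log, is shown below; it is a rough reconstruction, not the script itself. The socket path, addresses, NQNs, host UUID and key names are the values of this particular run, RPC_TGT stands in for whatever target-side socket rpc_cmd talks to (its expansion is not shown in this excerpt), and KEY0_SECRET/CKEY0_SECRET are placeholders for the DHHC-1 secrets printed above.

#!/usr/bin/env bash
# Rough sketch of one connect_authenticate pass (sha256 / ffdhe8192 / key0), reconstructed from this log.
RPC_HOST="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"   # host-side RPC, as used by hostrpc
RPC_TGT="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                          # placeholder for the target-side RPC used by rpc_cmd
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390

# Host side: accept only the digest/DH group under test.
$RPC_HOST bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: let this host authenticate with key0, using ckey0 as the controller key.
$RPC_TGT nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach from the host, then verify what the target negotiated on the resulting qpair.
$RPC_HOST bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC_HOST bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
$RPC_TGT nvmf_subsystem_get_qpairs "$SUBNQN" \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect: sha256 / ffdhe8192 / completed
$RPC_HOST bdev_nvme_detach_controller nvme0

# Same credentials through the kernel initiator, then remove the host entry again.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 \
    --dhchap-secret "$KEY0_SECRET" --dhchap-ctrl-secret "$CKEY0_SECRET"
nvme disconnect -n "$SUBNQN"
$RPC_TGT nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The log then continues with the next key index under the same digest/DH-group pair, as in the iteration that follows.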
00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.244 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.503 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:51.070 00:10:51.070 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.070 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.070 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.328 19:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.328 { 00:10:51.328 "cntlid": 43, 00:10:51.328 "qid": 0, 00:10:51.328 "state": "enabled", 00:10:51.328 "thread": "nvmf_tgt_poll_group_000", 00:10:51.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:51.328 "listen_address": { 00:10:51.328 "trtype": "TCP", 00:10:51.328 "adrfam": "IPv4", 00:10:51.328 "traddr": "10.0.0.3", 00:10:51.328 "trsvcid": "4420" 00:10:51.328 }, 00:10:51.328 "peer_address": { 00:10:51.328 "trtype": "TCP", 00:10:51.328 "adrfam": "IPv4", 00:10:51.328 "traddr": "10.0.0.1", 00:10:51.328 "trsvcid": "42756" 00:10:51.328 }, 00:10:51.328 "auth": { 00:10:51.328 "state": "completed", 00:10:51.328 "digest": "sha256", 00:10:51.328 "dhgroup": "ffdhe8192" 00:10:51.328 } 00:10:51.328 } 00:10:51.328 ]' 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:51.328 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.587 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.587 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.587 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.846 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:51.846 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.412 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.672 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:53.240 00:10:53.240 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.240 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.240 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.498 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.498 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.498 19:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.498 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.758 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.758 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.758 { 00:10:53.758 "cntlid": 45, 00:10:53.758 "qid": 0, 00:10:53.758 "state": "enabled", 00:10:53.758 "thread": "nvmf_tgt_poll_group_000", 00:10:53.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:53.758 "listen_address": { 00:10:53.758 "trtype": "TCP", 00:10:53.758 "adrfam": "IPv4", 00:10:53.758 "traddr": "10.0.0.3", 00:10:53.758 "trsvcid": "4420" 00:10:53.758 }, 00:10:53.758 "peer_address": { 00:10:53.758 "trtype": "TCP", 00:10:53.758 "adrfam": "IPv4", 00:10:53.758 "traddr": "10.0.0.1", 00:10:53.758 "trsvcid": "42804" 00:10:53.758 }, 00:10:53.758 "auth": { 00:10:53.758 "state": "completed", 00:10:53.758 "digest": "sha256", 00:10:53.758 "dhgroup": "ffdhe8192" 00:10:53.758 } 00:10:53.758 } 00:10:53.758 ]' 00:10:53.758 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.758 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.758 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.758 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.758 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.758 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.758 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.758 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.017 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:54.017 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
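After each attach the test verifies both ends before tearing down: the host reports the attached controller name, and the target's qpair carries an auth block whose digest, dhgroup and state are checked. A sketch of that verification step, assuming the same socket layout as above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Host side: the attached controller should show up as nvme0.
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # Target side: the qpair's auth block reflects the negotiated parameters.
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  echo "$qpairs" | jq -r '.[0].auth.digest'    # e.g. sha256
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # e.g. ffdhe8192
  echo "$qpairs" | jq -r '.[0].auth.state'     # "completed" on success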
00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.585 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:54.844 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:55.411 00:10:55.411 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.411 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.411 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.979 
19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.979 { 00:10:55.979 "cntlid": 47, 00:10:55.979 "qid": 0, 00:10:55.979 "state": "enabled", 00:10:55.979 "thread": "nvmf_tgt_poll_group_000", 00:10:55.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:55.979 "listen_address": { 00:10:55.979 "trtype": "TCP", 00:10:55.979 "adrfam": "IPv4", 00:10:55.979 "traddr": "10.0.0.3", 00:10:55.979 "trsvcid": "4420" 00:10:55.979 }, 00:10:55.979 "peer_address": { 00:10:55.979 "trtype": "TCP", 00:10:55.979 "adrfam": "IPv4", 00:10:55.979 "traddr": "10.0.0.1", 00:10:55.979 "trsvcid": "42842" 00:10:55.979 }, 00:10:55.979 "auth": { 00:10:55.979 "state": "completed", 00:10:55.979 "digest": "sha256", 00:10:55.979 "dhgroup": "ffdhe8192" 00:10:55.979 } 00:10:55.979 } 00:10:55.979 ]' 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.979 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.238 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:56.238 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
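Each round then closes with a kernel-initiator pass and a cleanup so the next digest/dhgroup combination (sha384 with the null group, starting below) begins from a clean subsystem. A sketch of that closing step — the $HOST_SECRET and $CTRL_SECRET variables are placeholders for the plain-text DHHC-1 strings printed in the log, and the controller secret is simply omitted for key3, which has no controller key configured:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390
  # Kernel host: authenticate with nvme-cli using the plain-text DHHC-1 secrets.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Target side: drop the host entry before the next digest/dhgroup iteration.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"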
00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.174 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.434 00:10:57.434 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.434 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.434 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.004 { 00:10:58.004 "cntlid": 49, 00:10:58.004 "qid": 0, 00:10:58.004 "state": "enabled", 00:10:58.004 "thread": "nvmf_tgt_poll_group_000", 00:10:58.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:58.004 "listen_address": { 00:10:58.004 "trtype": "TCP", 00:10:58.004 "adrfam": "IPv4", 00:10:58.004 "traddr": "10.0.0.3", 00:10:58.004 "trsvcid": "4420" 00:10:58.004 }, 00:10:58.004 "peer_address": { 00:10:58.004 "trtype": "TCP", 00:10:58.004 "adrfam": "IPv4", 00:10:58.004 "traddr": "10.0.0.1", 00:10:58.004 "trsvcid": "42862" 00:10:58.004 }, 00:10:58.004 "auth": { 00:10:58.004 "state": "completed", 00:10:58.004 "digest": "sha384", 00:10:58.004 "dhgroup": "null" 00:10:58.004 } 00:10:58.004 } 00:10:58.004 ]' 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.004 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.262 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:58.262 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:10:59.198 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.198 19:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:10:59.198 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.198 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.198 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.198 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.198 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:59.198 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.456 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.715 00:10:59.716 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.716 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.716 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.974 { 00:10:59.974 "cntlid": 51, 00:10:59.974 "qid": 0, 00:10:59.974 "state": "enabled", 00:10:59.974 "thread": "nvmf_tgt_poll_group_000", 00:10:59.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:10:59.974 "listen_address": { 00:10:59.974 "trtype": "TCP", 00:10:59.974 "adrfam": "IPv4", 00:10:59.974 "traddr": "10.0.0.3", 00:10:59.974 "trsvcid": "4420" 00:10:59.974 }, 00:10:59.974 "peer_address": { 00:10:59.974 "trtype": "TCP", 00:10:59.974 "adrfam": "IPv4", 00:10:59.974 "traddr": "10.0.0.1", 00:10:59.974 "trsvcid": "46000" 00:10:59.974 }, 00:10:59.974 "auth": { 00:10:59.974 "state": "completed", 00:10:59.974 "digest": "sha384", 00:10:59.974 "dhgroup": "null" 00:10:59.974 } 00:10:59.974 } 00:10:59.974 ]' 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.974 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.233 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:00.233 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.233 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.233 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.233 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.491 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:00.491 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:01.057 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.057 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.057 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:01.057 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.057 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.315 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.316 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.316 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.316 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.316 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.316 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.884 00:11:01.884 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.884 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:01.884 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.144 { 00:11:02.144 "cntlid": 53, 00:11:02.144 "qid": 0, 00:11:02.144 "state": "enabled", 00:11:02.144 "thread": "nvmf_tgt_poll_group_000", 00:11:02.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:02.144 "listen_address": { 00:11:02.144 "trtype": "TCP", 00:11:02.144 "adrfam": "IPv4", 00:11:02.144 "traddr": "10.0.0.3", 00:11:02.144 "trsvcid": "4420" 00:11:02.144 }, 00:11:02.144 "peer_address": { 00:11:02.144 "trtype": "TCP", 00:11:02.144 "adrfam": "IPv4", 00:11:02.144 "traddr": "10.0.0.1", 00:11:02.144 "trsvcid": "46026" 00:11:02.144 }, 00:11:02.144 "auth": { 00:11:02.144 "state": "completed", 00:11:02.144 "digest": "sha384", 00:11:02.144 "dhgroup": "null" 00:11:02.144 } 00:11:02.144 } 00:11:02.144 ]' 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.144 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.403 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:02.403 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:03.339 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:03.597 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:03.855 00:11:03.855 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.855 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.855 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.114 { 00:11:04.114 "cntlid": 55, 00:11:04.114 "qid": 0, 00:11:04.114 "state": "enabled", 00:11:04.114 "thread": "nvmf_tgt_poll_group_000", 00:11:04.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:04.114 "listen_address": { 00:11:04.114 "trtype": "TCP", 00:11:04.114 "adrfam": "IPv4", 00:11:04.114 "traddr": "10.0.0.3", 00:11:04.114 "trsvcid": "4420" 00:11:04.114 }, 00:11:04.114 "peer_address": { 00:11:04.114 "trtype": "TCP", 00:11:04.114 "adrfam": "IPv4", 00:11:04.114 "traddr": "10.0.0.1", 00:11:04.114 "trsvcid": "46052" 00:11:04.114 }, 00:11:04.114 "auth": { 00:11:04.114 "state": "completed", 00:11:04.114 "digest": "sha384", 00:11:04.114 "dhgroup": "null" 00:11:04.114 } 00:11:04.114 } 00:11:04.114 ]' 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:04.114 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.373 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.373 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.373 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.630 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:04.630 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.198 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.457 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.024 00:11:06.024 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.024 
19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.024 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.284 { 00:11:06.284 "cntlid": 57, 00:11:06.284 "qid": 0, 00:11:06.284 "state": "enabled", 00:11:06.284 "thread": "nvmf_tgt_poll_group_000", 00:11:06.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:06.284 "listen_address": { 00:11:06.284 "trtype": "TCP", 00:11:06.284 "adrfam": "IPv4", 00:11:06.284 "traddr": "10.0.0.3", 00:11:06.284 "trsvcid": "4420" 00:11:06.284 }, 00:11:06.284 "peer_address": { 00:11:06.284 "trtype": "TCP", 00:11:06.284 "adrfam": "IPv4", 00:11:06.284 "traddr": "10.0.0.1", 00:11:06.284 "trsvcid": "46090" 00:11:06.284 }, 00:11:06.284 "auth": { 00:11:06.284 "state": "completed", 00:11:06.284 "digest": "sha384", 00:11:06.284 "dhgroup": "ffdhe2048" 00:11:06.284 } 00:11:06.284 } 00:11:06.284 ]' 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.543 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:06.544 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: 
--dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.478 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.045 00:11:08.045 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.045 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.045 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.303 { 00:11:08.303 "cntlid": 59, 00:11:08.303 "qid": 0, 00:11:08.303 "state": "enabled", 00:11:08.303 "thread": "nvmf_tgt_poll_group_000", 00:11:08.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:08.303 "listen_address": { 00:11:08.303 "trtype": "TCP", 00:11:08.303 "adrfam": "IPv4", 00:11:08.303 "traddr": "10.0.0.3", 00:11:08.303 "trsvcid": "4420" 00:11:08.303 }, 00:11:08.303 "peer_address": { 00:11:08.303 "trtype": "TCP", 00:11:08.303 "adrfam": "IPv4", 00:11:08.303 "traddr": "10.0.0.1", 00:11:08.303 "trsvcid": "46108" 00:11:08.303 }, 00:11:08.303 "auth": { 00:11:08.303 "state": "completed", 00:11:08.303 "digest": "sha384", 00:11:08.303 "dhgroup": "ffdhe2048" 00:11:08.303 } 00:11:08.303 } 00:11:08.303 ]' 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.303 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.562 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:08.562 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.533 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.100 00:11:10.100 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.100 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.100 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.359 { 00:11:10.359 "cntlid": 61, 00:11:10.359 "qid": 0, 00:11:10.359 "state": "enabled", 00:11:10.359 "thread": "nvmf_tgt_poll_group_000", 00:11:10.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:10.359 "listen_address": { 00:11:10.359 "trtype": "TCP", 00:11:10.359 "adrfam": "IPv4", 00:11:10.359 "traddr": "10.0.0.3", 00:11:10.359 "trsvcid": "4420" 00:11:10.359 }, 00:11:10.359 "peer_address": { 00:11:10.359 "trtype": "TCP", 00:11:10.359 "adrfam": "IPv4", 00:11:10.359 "traddr": "10.0.0.1", 00:11:10.359 "trsvcid": "36582" 00:11:10.359 }, 00:11:10.359 "auth": { 00:11:10.359 "state": "completed", 00:11:10.359 "digest": "sha384", 00:11:10.359 "dhgroup": "ffdhe2048" 00:11:10.359 } 00:11:10.359 } 00:11:10.359 ]' 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.359 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.618 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:10.618 19:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:11.552 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:11.810 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:11.810 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.810 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.810 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.811 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.069 00:11:12.069 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.069 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.069 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.328 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.328 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.328 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.328 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.328 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.328 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.328 { 00:11:12.328 "cntlid": 63, 00:11:12.328 "qid": 0, 00:11:12.328 "state": "enabled", 00:11:12.328 "thread": "nvmf_tgt_poll_group_000", 00:11:12.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:12.328 "listen_address": { 00:11:12.328 "trtype": "TCP", 00:11:12.328 "adrfam": "IPv4", 00:11:12.328 "traddr": "10.0.0.3", 00:11:12.328 "trsvcid": "4420" 00:11:12.328 }, 00:11:12.328 "peer_address": { 00:11:12.328 "trtype": "TCP", 00:11:12.328 "adrfam": "IPv4", 00:11:12.328 "traddr": "10.0.0.1", 00:11:12.328 "trsvcid": "36600" 00:11:12.328 }, 00:11:12.328 "auth": { 00:11:12.328 "state": "completed", 00:11:12.328 "digest": "sha384", 00:11:12.328 "dhgroup": "ffdhe2048" 00:11:12.328 } 00:11:12.328 } 00:11:12.328 ]' 00:11:12.328 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.587 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.587 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.587 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:12.587 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.587 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.587 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.587 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.846 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:12.846 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.414 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:13.980 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.238 00:11:14.238 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.238 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.238 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.496 { 00:11:14.496 "cntlid": 65, 00:11:14.496 "qid": 0, 00:11:14.496 "state": "enabled", 00:11:14.496 "thread": "nvmf_tgt_poll_group_000", 00:11:14.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:14.496 "listen_address": { 00:11:14.496 "trtype": "TCP", 00:11:14.496 "adrfam": "IPv4", 00:11:14.496 "traddr": "10.0.0.3", 00:11:14.496 "trsvcid": "4420" 00:11:14.496 }, 00:11:14.496 "peer_address": { 00:11:14.496 "trtype": "TCP", 00:11:14.496 "adrfam": "IPv4", 00:11:14.496 "traddr": "10.0.0.1", 00:11:14.496 "trsvcid": "36622" 00:11:14.496 }, 00:11:14.496 "auth": { 00:11:14.496 "state": "completed", 00:11:14.496 "digest": "sha384", 00:11:14.496 "dhgroup": "ffdhe3072" 00:11:14.496 } 00:11:14.496 } 00:11:14.496 ]' 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.496 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.753 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:14.753 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:15.685 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.685 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.942 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.942 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.942 19:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.942 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.200 00:11:16.200 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.200 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.200 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.484 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.484 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.484 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.484 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.484 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.484 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.484 { 00:11:16.484 "cntlid": 67, 00:11:16.484 "qid": 0, 00:11:16.485 "state": "enabled", 00:11:16.485 "thread": "nvmf_tgt_poll_group_000", 00:11:16.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:16.485 "listen_address": { 00:11:16.485 "trtype": "TCP", 00:11:16.485 "adrfam": "IPv4", 00:11:16.485 "traddr": "10.0.0.3", 00:11:16.485 "trsvcid": "4420" 00:11:16.485 }, 00:11:16.485 "peer_address": { 00:11:16.485 "trtype": "TCP", 00:11:16.485 "adrfam": "IPv4", 00:11:16.485 "traddr": "10.0.0.1", 00:11:16.485 "trsvcid": "36662" 00:11:16.485 }, 00:11:16.485 "auth": { 00:11:16.485 "state": "completed", 00:11:16.485 "digest": "sha384", 00:11:16.485 "dhgroup": "ffdhe3072" 00:11:16.485 } 00:11:16.485 } 00:11:16.485 ]' 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.485 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.048 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:17.048 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:17.612 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:17.869 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:17.869 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.870 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.436 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.436 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.436 { 00:11:18.436 "cntlid": 69, 00:11:18.436 "qid": 0, 00:11:18.436 "state": "enabled", 00:11:18.436 "thread": "nvmf_tgt_poll_group_000", 00:11:18.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:18.436 "listen_address": { 00:11:18.436 "trtype": "TCP", 00:11:18.436 "adrfam": "IPv4", 00:11:18.436 "traddr": "10.0.0.3", 00:11:18.436 "trsvcid": "4420" 00:11:18.436 }, 00:11:18.436 "peer_address": { 00:11:18.436 "trtype": "TCP", 00:11:18.436 "adrfam": "IPv4", 00:11:18.436 "traddr": "10.0.0.1", 00:11:18.436 "trsvcid": "36704" 00:11:18.436 }, 00:11:18.436 "auth": { 00:11:18.436 "state": "completed", 00:11:18.436 "digest": "sha384", 00:11:18.436 "dhgroup": "ffdhe3072" 00:11:18.436 } 00:11:18.436 } 00:11:18.436 ]' 00:11:18.694 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.694 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.694 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.694 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:18.694 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.694 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.694 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:18.694 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.951 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:18.952 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:19.884 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.884 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:19.884 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.884 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.884 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.884 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.885 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.451 00:11:20.451 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.452 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.452 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.709 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.709 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.709 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.709 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.709 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.709 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.709 { 00:11:20.709 "cntlid": 71, 00:11:20.709 "qid": 0, 00:11:20.709 "state": "enabled", 00:11:20.709 "thread": "nvmf_tgt_poll_group_000", 00:11:20.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:20.709 "listen_address": { 00:11:20.709 "trtype": "TCP", 00:11:20.709 "adrfam": "IPv4", 00:11:20.709 "traddr": "10.0.0.3", 00:11:20.709 "trsvcid": "4420" 00:11:20.709 }, 00:11:20.709 "peer_address": { 00:11:20.709 "trtype": "TCP", 00:11:20.709 "adrfam": "IPv4", 00:11:20.709 "traddr": "10.0.0.1", 00:11:20.709 "trsvcid": "48138" 00:11:20.709 }, 00:11:20.709 "auth": { 00:11:20.709 "state": "completed", 00:11:20.709 "digest": "sha384", 00:11:20.709 "dhgroup": "ffdhe3072" 00:11:20.709 } 00:11:20.709 } 00:11:20.709 ]' 00:11:20.709 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.709 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.709 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.709 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:20.709 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.973 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.973 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.973 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.973 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:20.973 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.912 19:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.912 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.479 00:11:22.479 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.479 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.479 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.737 { 00:11:22.737 "cntlid": 73, 00:11:22.737 "qid": 0, 00:11:22.737 "state": "enabled", 00:11:22.737 "thread": "nvmf_tgt_poll_group_000", 00:11:22.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:22.737 "listen_address": { 00:11:22.737 "trtype": "TCP", 00:11:22.737 "adrfam": "IPv4", 00:11:22.737 "traddr": "10.0.0.3", 00:11:22.737 "trsvcid": "4420" 00:11:22.737 }, 00:11:22.737 "peer_address": { 00:11:22.737 "trtype": "TCP", 00:11:22.737 "adrfam": "IPv4", 00:11:22.737 "traddr": "10.0.0.1", 00:11:22.737 "trsvcid": "48184" 00:11:22.737 }, 00:11:22.737 "auth": { 00:11:22.737 "state": "completed", 00:11:22.737 "digest": "sha384", 00:11:22.737 "dhgroup": "ffdhe4096" 00:11:22.737 } 00:11:22.737 } 00:11:22.737 ]' 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.737 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.996 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.996 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.996 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.254 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:23.254 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:23.820 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.079 19:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.079 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.338 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.338 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.338 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.338 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.597 00:11:24.597 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.597 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.597 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.855 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.855 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.855 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.855 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.855 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.855 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.855 { 00:11:24.855 "cntlid": 75, 00:11:24.855 "qid": 0, 00:11:24.855 "state": "enabled", 00:11:24.855 "thread": "nvmf_tgt_poll_group_000", 00:11:24.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:24.855 "listen_address": { 00:11:24.855 "trtype": "TCP", 00:11:24.855 "adrfam": "IPv4", 00:11:24.855 "traddr": "10.0.0.3", 00:11:24.855 "trsvcid": "4420" 00:11:24.855 }, 00:11:24.855 "peer_address": { 00:11:24.855 "trtype": "TCP", 00:11:24.855 "adrfam": "IPv4", 00:11:24.855 "traddr": "10.0.0.1", 00:11:24.855 "trsvcid": "48210" 00:11:24.855 }, 00:11:24.855 "auth": { 00:11:24.855 "state": "completed", 00:11:24.855 "digest": "sha384", 00:11:24.855 "dhgroup": "ffdhe4096" 00:11:24.855 } 00:11:24.855 } 00:11:24.855 ]' 00:11:24.855 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.114 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.114 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.114 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:25.114 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.114 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.114 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.114 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.372 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:25.372 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:26.307 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.566 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.824 00:11:26.824 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.824 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.824 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.083 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.083 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.083 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.083 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.083 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.083 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.083 { 00:11:27.083 "cntlid": 77, 00:11:27.083 "qid": 0, 00:11:27.083 "state": "enabled", 00:11:27.083 "thread": "nvmf_tgt_poll_group_000", 00:11:27.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:27.083 "listen_address": { 00:11:27.083 "trtype": "TCP", 00:11:27.083 "adrfam": "IPv4", 00:11:27.083 "traddr": "10.0.0.3", 00:11:27.083 "trsvcid": "4420" 00:11:27.083 }, 00:11:27.083 "peer_address": { 00:11:27.083 "trtype": "TCP", 00:11:27.083 "adrfam": "IPv4", 00:11:27.083 "traddr": "10.0.0.1", 00:11:27.083 "trsvcid": "48240" 00:11:27.083 }, 00:11:27.083 "auth": { 00:11:27.083 "state": "completed", 00:11:27.083 "digest": "sha384", 00:11:27.083 "dhgroup": "ffdhe4096" 00:11:27.083 } 00:11:27.083 } 00:11:27.083 ]' 00:11:27.083 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.342 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.342 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:27.342 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:27.342 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.342 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.342 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.342 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.601 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:27.601 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:28.167 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.167 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:28.167 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.167 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.425 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.425 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.425 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.425 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.682 19:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.682 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.940 00:11:28.940 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.940 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.940 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.200 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.200 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.200 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.200 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.200 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.200 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.200 { 00:11:29.200 "cntlid": 79, 00:11:29.200 "qid": 0, 00:11:29.200 "state": "enabled", 00:11:29.200 "thread": "nvmf_tgt_poll_group_000", 00:11:29.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:29.200 "listen_address": { 00:11:29.200 "trtype": "TCP", 00:11:29.200 "adrfam": "IPv4", 00:11:29.200 "traddr": "10.0.0.3", 00:11:29.200 "trsvcid": "4420" 00:11:29.200 }, 00:11:29.200 "peer_address": { 00:11:29.200 "trtype": "TCP", 00:11:29.200 "adrfam": "IPv4", 00:11:29.200 "traddr": "10.0.0.1", 00:11:29.200 "trsvcid": "33694" 00:11:29.200 }, 00:11:29.200 "auth": { 00:11:29.200 "state": "completed", 00:11:29.200 "digest": "sha384", 00:11:29.200 "dhgroup": "ffdhe4096" 00:11:29.200 } 00:11:29.200 } 00:11:29.200 ]' 00:11:29.200 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.459 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.459 19:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.459 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.459 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.459 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.459 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.459 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.718 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:29.718 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:30.356 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.614 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.181 00:11:31.181 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.181 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.181 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.441 { 00:11:31.441 "cntlid": 81, 00:11:31.441 "qid": 0, 00:11:31.441 "state": "enabled", 00:11:31.441 "thread": "nvmf_tgt_poll_group_000", 00:11:31.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:31.441 "listen_address": { 00:11:31.441 "trtype": "TCP", 00:11:31.441 "adrfam": "IPv4", 00:11:31.441 "traddr": "10.0.0.3", 00:11:31.441 "trsvcid": "4420" 00:11:31.441 }, 00:11:31.441 "peer_address": { 00:11:31.441 "trtype": "TCP", 00:11:31.441 "adrfam": "IPv4", 00:11:31.441 "traddr": "10.0.0.1", 00:11:31.441 "trsvcid": "33710" 00:11:31.441 }, 00:11:31.441 "auth": { 00:11:31.441 "state": "completed", 00:11:31.441 "digest": "sha384", 00:11:31.441 "dhgroup": "ffdhe6144" 00:11:31.441 } 00:11:31.441 } 00:11:31.441 ]' 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
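The run above keeps re-checking the same three fields of the target's qpair report after each attach: the negotiated digest, the FFDHE group, and the final auth state. A standalone sketch of that verification is below, assuming the RPC socket path, NQN, and controller name seen in this log; the target-side rpc.py call uses its default socket here, which is an assumption, not something shown in this section.

# Sketch only: reproduces the qpair auth-check pattern from the log above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

# The host-side controller should have come up under the expected name.
name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1

# Ask the target for the subsystem's qpairs and check what was negotiated.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384    ]] || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe6144 ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]] || exit 1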
00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.441 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.442 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:31.701 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.701 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.701 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.701 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.960 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:31.960 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.527 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.786 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.787 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.352 00:11:33.352 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.352 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.352 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.611 { 00:11:33.611 "cntlid": 83, 00:11:33.611 "qid": 0, 00:11:33.611 "state": "enabled", 00:11:33.611 "thread": "nvmf_tgt_poll_group_000", 00:11:33.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:33.611 "listen_address": { 00:11:33.611 "trtype": "TCP", 00:11:33.611 "adrfam": "IPv4", 00:11:33.611 "traddr": "10.0.0.3", 00:11:33.611 "trsvcid": "4420" 00:11:33.611 }, 00:11:33.611 "peer_address": { 00:11:33.611 "trtype": "TCP", 00:11:33.611 "adrfam": "IPv4", 00:11:33.611 "traddr": "10.0.0.1", 00:11:33.611 "trsvcid": "33740" 00:11:33.611 }, 00:11:33.611 "auth": { 00:11:33.611 "state": "completed", 00:11:33.611 "digest": "sha384", 
00:11:33.611 "dhgroup": "ffdhe6144" 00:11:33.611 } 00:11:33.611 } 00:11:33.611 ]' 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:33.611 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.611 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.611 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.611 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.869 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:33.869 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:34.436 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.436 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:34.436 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.436 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.694 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.694 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.694 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:34.694 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.952 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.212 00:11:35.470 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.470 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.470 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.729 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.729 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.729 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.729 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.729 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.729 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.729 { 00:11:35.729 "cntlid": 85, 00:11:35.729 "qid": 0, 00:11:35.729 "state": "enabled", 00:11:35.729 "thread": "nvmf_tgt_poll_group_000", 00:11:35.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:35.729 "listen_address": { 00:11:35.729 "trtype": "TCP", 00:11:35.729 "adrfam": "IPv4", 00:11:35.729 "traddr": "10.0.0.3", 00:11:35.729 "trsvcid": "4420" 00:11:35.729 }, 00:11:35.729 "peer_address": { 00:11:35.729 "trtype": "TCP", 00:11:35.729 "adrfam": "IPv4", 00:11:35.729 "traddr": "10.0.0.1", 00:11:35.729 "trsvcid": "33768" 
00:11:35.729 }, 00:11:35.729 "auth": { 00:11:35.729 "state": "completed", 00:11:35.729 "digest": "sha384", 00:11:35.729 "dhgroup": "ffdhe6144" 00:11:35.729 } 00:11:35.729 } 00:11:35.729 ]' 00:11:35.729 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.729 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.729 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.729 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:35.729 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.729 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.729 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.729 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.989 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:35.989 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:36.925 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:37.183 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.184 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.441 00:11:37.441 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.441 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.441 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.701 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.701 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.701 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.701 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.011 { 00:11:38.011 "cntlid": 87, 00:11:38.011 "qid": 0, 00:11:38.011 "state": "enabled", 00:11:38.011 "thread": "nvmf_tgt_poll_group_000", 00:11:38.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:38.011 "listen_address": { 00:11:38.011 "trtype": "TCP", 00:11:38.011 "adrfam": "IPv4", 00:11:38.011 "traddr": "10.0.0.3", 00:11:38.011 "trsvcid": "4420" 00:11:38.011 }, 00:11:38.011 "peer_address": { 00:11:38.011 "trtype": "TCP", 00:11:38.011 "adrfam": "IPv4", 00:11:38.011 "traddr": "10.0.0.1", 00:11:38.011 "trsvcid": 
"33792" 00:11:38.011 }, 00:11:38.011 "auth": { 00:11:38.011 "state": "completed", 00:11:38.011 "digest": "sha384", 00:11:38.011 "dhgroup": "ffdhe6144" 00:11:38.011 } 00:11:38.011 } 00:11:38.011 ]' 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.011 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.269 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:38.269 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:38.833 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.399 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.964 00:11:39.964 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.964 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.964 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.221 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.221 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.221 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.221 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.221 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.221 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.221 { 00:11:40.221 "cntlid": 89, 00:11:40.221 "qid": 0, 00:11:40.221 "state": "enabled", 00:11:40.221 "thread": "nvmf_tgt_poll_group_000", 00:11:40.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:40.221 "listen_address": { 00:11:40.221 "trtype": "TCP", 00:11:40.221 "adrfam": "IPv4", 00:11:40.221 "traddr": "10.0.0.3", 00:11:40.221 "trsvcid": "4420" 00:11:40.221 }, 00:11:40.222 "peer_address": { 00:11:40.222 
"trtype": "TCP", 00:11:40.222 "adrfam": "IPv4", 00:11:40.222 "traddr": "10.0.0.1", 00:11:40.222 "trsvcid": "44194" 00:11:40.222 }, 00:11:40.222 "auth": { 00:11:40.222 "state": "completed", 00:11:40.222 "digest": "sha384", 00:11:40.222 "dhgroup": "ffdhe8192" 00:11:40.222 } 00:11:40.222 } 00:11:40.222 ]' 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.222 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.492 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:40.492 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:41.470 19:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.470 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.403 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.403 { 00:11:42.403 "cntlid": 91, 00:11:42.403 "qid": 0, 00:11:42.403 "state": "enabled", 00:11:42.403 "thread": "nvmf_tgt_poll_group_000", 00:11:42.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 
00:11:42.403 "listen_address": { 00:11:42.403 "trtype": "TCP", 00:11:42.403 "adrfam": "IPv4", 00:11:42.403 "traddr": "10.0.0.3", 00:11:42.403 "trsvcid": "4420" 00:11:42.403 }, 00:11:42.403 "peer_address": { 00:11:42.403 "trtype": "TCP", 00:11:42.403 "adrfam": "IPv4", 00:11:42.403 "traddr": "10.0.0.1", 00:11:42.403 "trsvcid": "44234" 00:11:42.403 }, 00:11:42.403 "auth": { 00:11:42.403 "state": "completed", 00:11:42.403 "digest": "sha384", 00:11:42.403 "dhgroup": "ffdhe8192" 00:11:42.403 } 00:11:42.403 } 00:11:42.403 ]' 00:11:42.403 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.660 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.660 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.660 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:42.660 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.660 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.660 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.660 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.918 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:42.918 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:43.852 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.852 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.111 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.111 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.111 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.111 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.679 00:11:44.679 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.679 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.679 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.939 { 00:11:44.939 "cntlid": 93, 00:11:44.939 "qid": 0, 00:11:44.939 "state": "enabled", 00:11:44.939 "thread": 
"nvmf_tgt_poll_group_000", 00:11:44.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:44.939 "listen_address": { 00:11:44.939 "trtype": "TCP", 00:11:44.939 "adrfam": "IPv4", 00:11:44.939 "traddr": "10.0.0.3", 00:11:44.939 "trsvcid": "4420" 00:11:44.939 }, 00:11:44.939 "peer_address": { 00:11:44.939 "trtype": "TCP", 00:11:44.939 "adrfam": "IPv4", 00:11:44.939 "traddr": "10.0.0.1", 00:11:44.939 "trsvcid": "44270" 00:11:44.939 }, 00:11:44.939 "auth": { 00:11:44.939 "state": "completed", 00:11:44.939 "digest": "sha384", 00:11:44.939 "dhgroup": "ffdhe8192" 00:11:44.939 } 00:11:44.939 } 00:11:44.939 ]' 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:44.939 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.198 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.198 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.198 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.457 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:45.457 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:46.024 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.024 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:46.024 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.024 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.024 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.024 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.024 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.024 19:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.289 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.856 00:11:46.856 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.856 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.856 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.114 { 00:11:47.114 "cntlid": 95, 00:11:47.114 "qid": 0, 00:11:47.114 "state": "enabled", 00:11:47.114 
"thread": "nvmf_tgt_poll_group_000", 00:11:47.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:47.114 "listen_address": { 00:11:47.114 "trtype": "TCP", 00:11:47.114 "adrfam": "IPv4", 00:11:47.114 "traddr": "10.0.0.3", 00:11:47.114 "trsvcid": "4420" 00:11:47.114 }, 00:11:47.114 "peer_address": { 00:11:47.114 "trtype": "TCP", 00:11:47.114 "adrfam": "IPv4", 00:11:47.114 "traddr": "10.0.0.1", 00:11:47.114 "trsvcid": "44308" 00:11:47.114 }, 00:11:47.114 "auth": { 00:11:47.114 "state": "completed", 00:11:47.114 "digest": "sha384", 00:11:47.114 "dhgroup": "ffdhe8192" 00:11:47.114 } 00:11:47.114 } 00:11:47.114 ]' 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.114 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.372 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.372 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.372 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.372 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.372 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.630 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:47.630 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.195 19:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:48.195 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.762 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.020 00:11:49.020 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.021 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.021 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.279 { 00:11:49.279 "cntlid": 97, 00:11:49.279 "qid": 0, 00:11:49.279 "state": "enabled", 00:11:49.279 "thread": "nvmf_tgt_poll_group_000", 00:11:49.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:49.279 "listen_address": { 00:11:49.279 "trtype": "TCP", 00:11:49.279 "adrfam": "IPv4", 00:11:49.279 "traddr": "10.0.0.3", 00:11:49.279 "trsvcid": "4420" 00:11:49.279 }, 00:11:49.279 "peer_address": { 00:11:49.279 "trtype": "TCP", 00:11:49.279 "adrfam": "IPv4", 00:11:49.279 "traddr": "10.0.0.1", 00:11:49.279 "trsvcid": "35688" 00:11:49.279 }, 00:11:49.279 "auth": { 00:11:49.279 "state": "completed", 00:11:49.279 "digest": "sha512", 00:11:49.279 "dhgroup": "null" 00:11:49.279 } 00:11:49.279 } 00:11:49.279 ]' 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.279 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.537 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:49.537 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.501 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.764 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.764 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.764 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.764 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.022 00:11:51.022 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.022 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.022 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.279 19:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.279 { 00:11:51.279 "cntlid": 99, 00:11:51.279 "qid": 0, 00:11:51.279 "state": "enabled", 00:11:51.279 "thread": "nvmf_tgt_poll_group_000", 00:11:51.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:51.279 "listen_address": { 00:11:51.279 "trtype": "TCP", 00:11:51.279 "adrfam": "IPv4", 00:11:51.279 "traddr": "10.0.0.3", 00:11:51.279 "trsvcid": "4420" 00:11:51.279 }, 00:11:51.279 "peer_address": { 00:11:51.279 "trtype": "TCP", 00:11:51.279 "adrfam": "IPv4", 00:11:51.279 "traddr": "10.0.0.1", 00:11:51.279 "trsvcid": "35716" 00:11:51.279 }, 00:11:51.279 "auth": { 00:11:51.279 "state": "completed", 00:11:51.279 "digest": "sha512", 00:11:51.279 "dhgroup": "null" 00:11:51.279 } 00:11:51.279 } 00:11:51.279 ]' 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.279 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.537 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:51.537 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.470 19:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.470 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.037 00:11:53.037 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.037 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.037 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.295 { 00:11:53.295 "cntlid": 101, 00:11:53.295 "qid": 0, 00:11:53.295 "state": "enabled", 00:11:53.295 "thread": "nvmf_tgt_poll_group_000", 00:11:53.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:53.295 "listen_address": { 00:11:53.295 "trtype": "TCP", 00:11:53.295 "adrfam": "IPv4", 00:11:53.295 "traddr": "10.0.0.3", 00:11:53.295 "trsvcid": "4420" 00:11:53.295 }, 00:11:53.295 "peer_address": { 00:11:53.295 "trtype": "TCP", 00:11:53.295 "adrfam": "IPv4", 00:11:53.295 "traddr": "10.0.0.1", 00:11:53.295 "trsvcid": "35742" 00:11:53.295 }, 00:11:53.295 "auth": { 00:11:53.295 "state": "completed", 00:11:53.295 "digest": "sha512", 00:11:53.295 "dhgroup": "null" 00:11:53.295 } 00:11:53.295 } 00:11:53.295 ]' 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.295 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.554 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:53.554 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:11:54.119 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.119 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:54.119 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.119 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:54.377 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.377 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.377 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.377 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.636 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.894 00:11:54.894 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.894 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.894 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.153 { 00:11:55.153 "cntlid": 103, 00:11:55.153 "qid": 0, 00:11:55.153 "state": "enabled", 00:11:55.153 "thread": "nvmf_tgt_poll_group_000", 00:11:55.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:55.153 "listen_address": { 00:11:55.153 "trtype": "TCP", 00:11:55.153 "adrfam": "IPv4", 00:11:55.153 "traddr": "10.0.0.3", 00:11:55.153 "trsvcid": "4420" 00:11:55.153 }, 00:11:55.153 "peer_address": { 00:11:55.153 "trtype": "TCP", 00:11:55.153 "adrfam": "IPv4", 00:11:55.153 "traddr": "10.0.0.1", 00:11:55.153 "trsvcid": "35760" 00:11:55.153 }, 00:11:55.153 "auth": { 00:11:55.153 "state": "completed", 00:11:55.153 "digest": "sha512", 00:11:55.153 "dhgroup": "null" 00:11:55.153 } 00:11:55.153 } 00:11:55.153 ]' 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.153 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.411 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:55.411 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.411 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.411 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.411 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.669 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:55.669 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:56.236 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.494 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.753 00:11:56.753 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.753 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.753 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.321 
19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.321 { 00:11:57.321 "cntlid": 105, 00:11:57.321 "qid": 0, 00:11:57.321 "state": "enabled", 00:11:57.321 "thread": "nvmf_tgt_poll_group_000", 00:11:57.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:57.321 "listen_address": { 00:11:57.321 "trtype": "TCP", 00:11:57.321 "adrfam": "IPv4", 00:11:57.321 "traddr": "10.0.0.3", 00:11:57.321 "trsvcid": "4420" 00:11:57.321 }, 00:11:57.321 "peer_address": { 00:11:57.321 "trtype": "TCP", 00:11:57.321 "adrfam": "IPv4", 00:11:57.321 "traddr": "10.0.0.1", 00:11:57.321 "trsvcid": "35780" 00:11:57.321 }, 00:11:57.321 "auth": { 00:11:57.321 "state": "completed", 00:11:57.321 "digest": "sha512", 00:11:57.321 "dhgroup": "ffdhe2048" 00:11:57.321 } 00:11:57.321 } 00:11:57.321 ]' 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.321 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.579 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:57.579 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:11:58.145 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.145 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:11:58.145 19:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.145 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.145 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.145 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.145 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:58.145 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.403 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.968 00:11:58.968 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.968 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.968 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.226 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.227 { 00:11:59.227 "cntlid": 107, 00:11:59.227 "qid": 0, 00:11:59.227 "state": "enabled", 00:11:59.227 "thread": "nvmf_tgt_poll_group_000", 00:11:59.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:11:59.227 "listen_address": { 00:11:59.227 "trtype": "TCP", 00:11:59.227 "adrfam": "IPv4", 00:11:59.227 "traddr": "10.0.0.3", 00:11:59.227 "trsvcid": "4420" 00:11:59.227 }, 00:11:59.227 "peer_address": { 00:11:59.227 "trtype": "TCP", 00:11:59.227 "adrfam": "IPv4", 00:11:59.227 "traddr": "10.0.0.1", 00:11:59.227 "trsvcid": "49976" 00:11:59.227 }, 00:11:59.227 "auth": { 00:11:59.227 "state": "completed", 00:11:59.227 "digest": "sha512", 00:11:59.227 "dhgroup": "ffdhe2048" 00:11:59.227 } 00:11:59.227 } 00:11:59.227 ]' 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.227 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.485 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:11:59.485 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:00.437 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.438 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.008 00:12:01.008 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.008 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.008 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.267 { 00:12:01.267 "cntlid": 109, 00:12:01.267 "qid": 0, 00:12:01.267 "state": "enabled", 00:12:01.267 "thread": "nvmf_tgt_poll_group_000", 00:12:01.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:01.267 "listen_address": { 00:12:01.267 "trtype": "TCP", 00:12:01.267 "adrfam": "IPv4", 00:12:01.267 "traddr": "10.0.0.3", 00:12:01.267 "trsvcid": "4420" 00:12:01.267 }, 00:12:01.267 "peer_address": { 00:12:01.267 "trtype": "TCP", 00:12:01.267 "adrfam": "IPv4", 00:12:01.267 "traddr": "10.0.0.1", 00:12:01.267 "trsvcid": "50012" 00:12:01.267 }, 00:12:01.267 "auth": { 00:12:01.267 "state": "completed", 00:12:01.267 "digest": "sha512", 00:12:01.267 "dhgroup": "ffdhe2048" 00:12:01.267 } 00:12:01.267 } 00:12:01.267 ]' 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.267 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.525 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:01.525 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.461 19:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.461 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.729 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.729 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:02.729 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.729 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.987 00:12:02.987 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.987 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.987 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.244 { 00:12:03.244 "cntlid": 111, 00:12:03.244 "qid": 0, 00:12:03.244 "state": "enabled", 00:12:03.244 "thread": "nvmf_tgt_poll_group_000", 00:12:03.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:03.244 "listen_address": { 00:12:03.244 "trtype": "TCP", 00:12:03.244 "adrfam": "IPv4", 00:12:03.244 "traddr": "10.0.0.3", 00:12:03.244 "trsvcid": "4420" 00:12:03.244 }, 00:12:03.244 "peer_address": { 00:12:03.244 "trtype": "TCP", 00:12:03.244 "adrfam": "IPv4", 00:12:03.244 "traddr": "10.0.0.1", 00:12:03.244 "trsvcid": "50038" 00:12:03.244 }, 00:12:03.244 "auth": { 00:12:03.244 "state": "completed", 00:12:03.244 "digest": "sha512", 00:12:03.244 "dhgroup": "ffdhe2048" 00:12:03.244 } 00:12:03.244 } 00:12:03.244 ]' 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.244 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.502 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:03.502 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:04.068 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.635 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.893 00:12:04.893 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.893 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.893 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.152 { 00:12:05.152 "cntlid": 113, 00:12:05.152 "qid": 0, 00:12:05.152 "state": "enabled", 00:12:05.152 "thread": "nvmf_tgt_poll_group_000", 00:12:05.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:05.152 "listen_address": { 00:12:05.152 "trtype": "TCP", 00:12:05.152 "adrfam": "IPv4", 00:12:05.152 "traddr": "10.0.0.3", 00:12:05.152 "trsvcid": "4420" 00:12:05.152 }, 00:12:05.152 "peer_address": { 00:12:05.152 "trtype": "TCP", 00:12:05.152 "adrfam": "IPv4", 00:12:05.152 "traddr": "10.0.0.1", 00:12:05.152 "trsvcid": "50058" 00:12:05.152 }, 00:12:05.152 "auth": { 00:12:05.152 "state": "completed", 00:12:05.152 "digest": "sha512", 00:12:05.152 "dhgroup": "ffdhe3072" 00:12:05.152 } 00:12:05.152 } 00:12:05.152 ]' 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.152 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.410 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:05.410 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:06.347 
19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.347 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.914 00:12:06.914 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.914 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.914 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.171 { 00:12:07.171 "cntlid": 115, 00:12:07.171 "qid": 0, 00:12:07.171 "state": "enabled", 00:12:07.171 "thread": "nvmf_tgt_poll_group_000", 00:12:07.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:07.171 "listen_address": { 00:12:07.171 "trtype": "TCP", 00:12:07.171 "adrfam": "IPv4", 00:12:07.171 "traddr": "10.0.0.3", 00:12:07.171 "trsvcid": "4420" 00:12:07.171 }, 00:12:07.171 "peer_address": { 00:12:07.171 "trtype": "TCP", 00:12:07.171 "adrfam": "IPv4", 00:12:07.171 "traddr": "10.0.0.1", 00:12:07.171 "trsvcid": "50072" 00:12:07.171 }, 00:12:07.171 "auth": { 00:12:07.171 "state": "completed", 00:12:07.171 "digest": "sha512", 00:12:07.171 "dhgroup": "ffdhe3072" 00:12:07.171 } 00:12:07.171 } 00:12:07.171 ]' 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.171 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.739 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:07.739 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: 
--dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:08.305 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:08.562 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:08.562 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.562 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.562 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:08.562 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.563 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.821 00:12:08.821 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.821 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.821 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.386 { 00:12:09.386 "cntlid": 117, 00:12:09.386 "qid": 0, 00:12:09.386 "state": "enabled", 00:12:09.386 "thread": "nvmf_tgt_poll_group_000", 00:12:09.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:09.386 "listen_address": { 00:12:09.386 "trtype": "TCP", 00:12:09.386 "adrfam": "IPv4", 00:12:09.386 "traddr": "10.0.0.3", 00:12:09.386 "trsvcid": "4420" 00:12:09.386 }, 00:12:09.386 "peer_address": { 00:12:09.386 "trtype": "TCP", 00:12:09.386 "adrfam": "IPv4", 00:12:09.386 "traddr": "10.0.0.1", 00:12:09.386 "trsvcid": "51932" 00:12:09.386 }, 00:12:09.386 "auth": { 00:12:09.386 "state": "completed", 00:12:09.386 "digest": "sha512", 00:12:09.386 "dhgroup": "ffdhe3072" 00:12:09.386 } 00:12:09.386 } 00:12:09.386 ]' 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.386 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.954 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:09.954 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 
560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.522 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.892 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:10.892 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.892 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:10.892 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:10.892 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:10.892 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.893 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:12:10.893 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.893 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.893 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.893 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:10.893 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.893 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.164 00:12:11.164 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.164 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.165 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.424 { 00:12:11.424 "cntlid": 119, 00:12:11.424 "qid": 0, 00:12:11.424 "state": "enabled", 00:12:11.424 "thread": "nvmf_tgt_poll_group_000", 00:12:11.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:11.424 "listen_address": { 00:12:11.424 "trtype": "TCP", 00:12:11.424 "adrfam": "IPv4", 00:12:11.424 "traddr": "10.0.0.3", 00:12:11.424 "trsvcid": "4420" 00:12:11.424 }, 00:12:11.424 "peer_address": { 00:12:11.424 "trtype": "TCP", 00:12:11.424 "adrfam": "IPv4", 00:12:11.424 "traddr": "10.0.0.1", 00:12:11.424 "trsvcid": "51966" 00:12:11.424 }, 00:12:11.424 "auth": { 00:12:11.424 "state": "completed", 00:12:11.424 "digest": "sha512", 00:12:11.424 "dhgroup": "ffdhe3072" 00:12:11.424 } 00:12:11.424 } 00:12:11.424 ]' 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.424 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.682 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:11.682 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.682 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.682 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.682 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.939 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:11.939 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret 
DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:12.504 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.762 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.329 00:12:13.329 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.329 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.329 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.588 { 00:12:13.588 "cntlid": 121, 00:12:13.588 "qid": 0, 00:12:13.588 "state": "enabled", 00:12:13.588 "thread": "nvmf_tgt_poll_group_000", 00:12:13.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:13.588 "listen_address": { 00:12:13.588 "trtype": "TCP", 00:12:13.588 "adrfam": "IPv4", 00:12:13.588 "traddr": "10.0.0.3", 00:12:13.588 "trsvcid": "4420" 00:12:13.588 }, 00:12:13.588 "peer_address": { 00:12:13.588 "trtype": "TCP", 00:12:13.588 "adrfam": "IPv4", 00:12:13.588 "traddr": "10.0.0.1", 00:12:13.588 "trsvcid": "51988" 00:12:13.588 }, 00:12:13.588 "auth": { 00:12:13.588 "state": "completed", 00:12:13.588 "digest": "sha512", 00:12:13.588 "dhgroup": "ffdhe4096" 00:12:13.588 } 00:12:13.588 } 00:12:13.588 ]' 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.588 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.157 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:14.157 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:14.724 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.725 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:14.725 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.725 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.725 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.725 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.725 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:14.725 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.983 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.983 19:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.551 00:12:15.551 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.551 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.551 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.810 { 00:12:15.810 "cntlid": 123, 00:12:15.810 "qid": 0, 00:12:15.810 "state": "enabled", 00:12:15.810 "thread": "nvmf_tgt_poll_group_000", 00:12:15.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:15.810 "listen_address": { 00:12:15.810 "trtype": "TCP", 00:12:15.810 "adrfam": "IPv4", 00:12:15.810 "traddr": "10.0.0.3", 00:12:15.810 "trsvcid": "4420" 00:12:15.810 }, 00:12:15.810 "peer_address": { 00:12:15.810 "trtype": "TCP", 00:12:15.810 "adrfam": "IPv4", 00:12:15.810 "traddr": "10.0.0.1", 00:12:15.810 "trsvcid": "52020" 00:12:15.810 }, 00:12:15.810 "auth": { 00:12:15.810 "state": "completed", 00:12:15.810 "digest": "sha512", 00:12:15.810 "dhgroup": "ffdhe4096" 00:12:15.810 } 00:12:15.810 } 00:12:15.810 ]' 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.810 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.069 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret 
DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:16.069 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:16.635 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.202 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.203 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.203 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.203 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.461 00:12:17.461 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.461 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.461 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.720 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.720 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.720 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.720 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.720 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.720 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.720 { 00:12:17.720 "cntlid": 125, 00:12:17.720 "qid": 0, 00:12:17.720 "state": "enabled", 00:12:17.720 "thread": "nvmf_tgt_poll_group_000", 00:12:17.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:17.720 "listen_address": { 00:12:17.720 "trtype": "TCP", 00:12:17.720 "adrfam": "IPv4", 00:12:17.720 "traddr": "10.0.0.3", 00:12:17.720 "trsvcid": "4420" 00:12:17.720 }, 00:12:17.720 "peer_address": { 00:12:17.720 "trtype": "TCP", 00:12:17.720 "adrfam": "IPv4", 00:12:17.720 "traddr": "10.0.0.1", 00:12:17.720 "trsvcid": "52050" 00:12:17.720 }, 00:12:17.720 "auth": { 00:12:17.720 "state": "completed", 00:12:17.720 "digest": "sha512", 00:12:17.720 "dhgroup": "ffdhe4096" 00:12:17.720 } 00:12:17.720 } 00:12:17.720 ]' 00:12:17.720 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.720 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.720 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.720 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:17.720 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.720 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.720 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.720 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.979 19:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:17.979 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:18.915 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.174 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.433 00:12:19.433 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.433 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.433 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.691 { 00:12:19.691 "cntlid": 127, 00:12:19.691 "qid": 0, 00:12:19.691 "state": "enabled", 00:12:19.691 "thread": "nvmf_tgt_poll_group_000", 00:12:19.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:19.691 "listen_address": { 00:12:19.691 "trtype": "TCP", 00:12:19.691 "adrfam": "IPv4", 00:12:19.691 "traddr": "10.0.0.3", 00:12:19.691 "trsvcid": "4420" 00:12:19.691 }, 00:12:19.691 "peer_address": { 00:12:19.691 "trtype": "TCP", 00:12:19.691 "adrfam": "IPv4", 00:12:19.691 "traddr": "10.0.0.1", 00:12:19.691 "trsvcid": "38022" 00:12:19.691 }, 00:12:19.691 "auth": { 00:12:19.691 "state": "completed", 00:12:19.691 "digest": "sha512", 00:12:19.691 "dhgroup": "ffdhe4096" 00:12:19.691 } 00:12:19.691 } 00:12:19.691 ]' 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.691 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.950 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:19.950 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.950 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.950 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.950 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
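
The records above and below repeat the same host/target round trip for every digest, FFDHE group and key index under test (ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144 with key0-key3 are visible in this part of the trace). As a reading aid, here is a minimal bash sketch of one such iteration, reconstructed only from the RPC invocations shown in this log; the rpc.py path, socket paths, NQNs and key names are taken from the trace, the key material itself is assumed to have been registered earlier in the run (not shown here), and the concrete group/key pair (ffdhe4096 with key0/ckey0) is just an example value.

#!/usr/bin/env bash
# One DHCHAP iteration as exercised by the trace (sketch, not the actual test script).
# Assumes: the target is reachable on 10.0.0.3:4420 and answers on its default RPC
# socket; the host bdev layer answers on /var/tmp/host.sock; keys key0/ckey0 are
# already registered on both sides (done earlier in the run, not shown here).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390
subnqn=nqn.2024-03.io.spdk:cnode0

# Target side: allow this host to authenticate with key0/ckey0.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: pin the negotiation to one digest/dhgroup, then attach.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Tear down again before the next digest/dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
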
00:12:20.210 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:20.210 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:20.777 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.777 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:20.777 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.778 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.778 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.778 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.778 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.778 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:20.778 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:21.035 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:21.035 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.035 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:21.035 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:21.035 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:21.035 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.036 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.036 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.036 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.036 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.036 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
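
The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace is what makes bidirectional authentication optional per key index; a stand-alone bash illustration of the same pattern, with placeholder array contents rather than the test's generated secrets:

  keyid=0
  ckeys=("<ckey0>" "<ckey1>" "<ckey2>")   # no entry for index 3 in this run

  # Expands to the two words "--dhchap-ctrlr-key ckey0" when a controller secret exists for
  # the index, and to nothing at all when it does not (key3 here), leaving that pass unidirectional.
  ckey_arg=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey_arg[@]}"
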
00:12:21.036 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.036 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.618 00:12:21.618 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.618 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.618 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.889 { 00:12:21.889 "cntlid": 129, 00:12:21.889 "qid": 0, 00:12:21.889 "state": "enabled", 00:12:21.889 "thread": "nvmf_tgt_poll_group_000", 00:12:21.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:21.889 "listen_address": { 00:12:21.889 "trtype": "TCP", 00:12:21.889 "adrfam": "IPv4", 00:12:21.889 "traddr": "10.0.0.3", 00:12:21.889 "trsvcid": "4420" 00:12:21.889 }, 00:12:21.889 "peer_address": { 00:12:21.889 "trtype": "TCP", 00:12:21.889 "adrfam": "IPv4", 00:12:21.889 "traddr": "10.0.0.1", 00:12:21.889 "trsvcid": "38056" 00:12:21.889 }, 00:12:21.889 "auth": { 00:12:21.889 "state": "completed", 00:12:21.889 "digest": "sha512", 00:12:21.889 "dhgroup": "ffdhe6144" 00:12:21.889 } 00:12:21.889 } 00:12:21.889 ]' 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.889 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.458 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:22.458 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:23.025 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.025 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:23.025 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.026 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.026 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.026 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.026 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.026 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.284 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.851 00:12:23.851 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.851 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.851 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.111 { 00:12:24.111 "cntlid": 131, 00:12:24.111 "qid": 0, 00:12:24.111 "state": "enabled", 00:12:24.111 "thread": "nvmf_tgt_poll_group_000", 00:12:24.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:24.111 "listen_address": { 00:12:24.111 "trtype": "TCP", 00:12:24.111 "adrfam": "IPv4", 00:12:24.111 "traddr": "10.0.0.3", 00:12:24.111 "trsvcid": "4420" 00:12:24.111 }, 00:12:24.111 "peer_address": { 00:12:24.111 "trtype": "TCP", 00:12:24.111 "adrfam": "IPv4", 00:12:24.111 "traddr": "10.0.0.1", 00:12:24.111 "trsvcid": "38092" 00:12:24.111 }, 00:12:24.111 "auth": { 00:12:24.111 "state": "completed", 00:12:24.111 "digest": "sha512", 00:12:24.111 "dhgroup": "ffdhe6144" 00:12:24.111 } 00:12:24.111 } 00:12:24.111 ]' 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
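
The three [[ ... ]] assertions above boil down to jq lookups against the nvmf_subsystem_get_qpairs output; a compact sketch of the same verification for this pass (sha512 / ffdhe6144), with the subsystem NQN taken from the trace:

  # rpc.py here stands for scripts/rpc.py talking to the target's default RPC socket.
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # The auth object on the qpair reports the negotiated DH-HMAC-CHAP parameters.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
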
00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.111 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.370 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:24.370 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.306 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.874 00:12:25.874 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.874 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.874 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.134 { 00:12:26.134 "cntlid": 133, 00:12:26.134 "qid": 0, 00:12:26.134 "state": "enabled", 00:12:26.134 "thread": "nvmf_tgt_poll_group_000", 00:12:26.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:26.134 "listen_address": { 00:12:26.134 "trtype": "TCP", 00:12:26.134 "adrfam": "IPv4", 00:12:26.134 "traddr": "10.0.0.3", 00:12:26.134 "trsvcid": "4420" 00:12:26.134 }, 00:12:26.134 "peer_address": { 00:12:26.134 "trtype": "TCP", 00:12:26.134 "adrfam": "IPv4", 00:12:26.134 "traddr": "10.0.0.1", 00:12:26.134 "trsvcid": "38118" 00:12:26.134 }, 00:12:26.134 "auth": { 00:12:26.134 "state": "completed", 00:12:26.134 "digest": "sha512", 00:12:26.134 "dhgroup": "ffdhe6144" 00:12:26.134 } 00:12:26.134 } 00:12:26.134 ]' 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.134 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.393 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.393 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.393 19:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.393 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.393 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.652 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:26.652 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:27.219 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.478 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.045 00:12:28.045 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.045 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.045 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.304 { 00:12:28.304 "cntlid": 135, 00:12:28.304 "qid": 0, 00:12:28.304 "state": "enabled", 00:12:28.304 "thread": "nvmf_tgt_poll_group_000", 00:12:28.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:28.304 "listen_address": { 00:12:28.304 "trtype": "TCP", 00:12:28.304 "adrfam": "IPv4", 00:12:28.304 "traddr": "10.0.0.3", 00:12:28.304 "trsvcid": "4420" 00:12:28.304 }, 00:12:28.304 "peer_address": { 00:12:28.304 "trtype": "TCP", 00:12:28.304 "adrfam": "IPv4", 00:12:28.304 "traddr": "10.0.0.1", 00:12:28.304 "trsvcid": "38162" 00:12:28.304 }, 00:12:28.304 "auth": { 00:12:28.304 "state": "completed", 00:12:28.304 "digest": "sha512", 00:12:28.304 "dhgroup": "ffdhe6144" 00:12:28.304 } 00:12:28.304 } 00:12:28.304 ]' 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.304 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.305 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:28.305 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.305 
19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.305 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.305 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.874 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:28.874 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:29.441 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:29.699 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:29.699 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.699 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:29.699 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:29.699 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:29.699 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.699 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.700 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.700 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.700 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.700 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.700 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.700 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.267 00:12:30.267 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.267 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.267 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.525 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.525 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.525 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.525 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.784 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.784 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.784 { 00:12:30.784 "cntlid": 137, 00:12:30.784 "qid": 0, 00:12:30.784 "state": "enabled", 00:12:30.784 "thread": "nvmf_tgt_poll_group_000", 00:12:30.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:30.784 "listen_address": { 00:12:30.784 "trtype": "TCP", 00:12:30.784 "adrfam": "IPv4", 00:12:30.784 "traddr": "10.0.0.3", 00:12:30.784 "trsvcid": "4420" 00:12:30.784 }, 00:12:30.784 "peer_address": { 00:12:30.784 "trtype": "TCP", 00:12:30.784 "adrfam": "IPv4", 00:12:30.784 "traddr": "10.0.0.1", 00:12:30.784 "trsvcid": "49126" 00:12:30.784 }, 00:12:30.784 "auth": { 00:12:30.784 "state": "completed", 00:12:30.784 "digest": "sha512", 00:12:30.784 "dhgroup": "ffdhe8192" 00:12:30.784 } 00:12:30.784 } 00:12:30.784 ]' 00:12:30.784 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.784 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.784 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.784 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.784 19:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.784 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.784 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.784 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.042 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:31.042 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:31.977 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.237 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:32.237 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.237 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:32.237 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:32.237 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:32.237 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.238 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.238 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.238 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.238 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.238 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.238 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.238 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.810 00:12:32.810 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.810 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.810 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.069 { 00:12:33.069 "cntlid": 139, 00:12:33.069 "qid": 0, 00:12:33.069 "state": "enabled", 00:12:33.069 "thread": "nvmf_tgt_poll_group_000", 00:12:33.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:33.069 "listen_address": { 00:12:33.069 "trtype": "TCP", 00:12:33.069 "adrfam": "IPv4", 00:12:33.069 "traddr": "10.0.0.3", 00:12:33.069 "trsvcid": "4420" 00:12:33.069 }, 00:12:33.069 "peer_address": { 00:12:33.069 "trtype": "TCP", 00:12:33.069 "adrfam": "IPv4", 00:12:33.069 "traddr": "10.0.0.1", 00:12:33.069 "trsvcid": "49140" 00:12:33.069 }, 00:12:33.069 "auth": { 00:12:33.069 "state": "completed", 00:12:33.069 "digest": "sha512", 00:12:33.069 "dhgroup": "ffdhe8192" 00:12:33.069 } 00:12:33.069 } 00:12:33.069 ]' 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.069 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.069 19:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.328 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.328 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.328 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.328 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.328 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.586 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:33.586 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: --dhchap-ctrl-secret DHHC-1:02:NTEwOTNmM2NlZDU2NmZjY2Y5MzQxZWQzNWQwY2JlOGY4YjFiZWI2NWM3NmUzZDMyE/gDSw==: 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.154 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.413 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.350 00:12:35.350 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.350 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.350 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.350 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.350 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.350 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.350 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.609 { 00:12:35.609 "cntlid": 141, 00:12:35.609 "qid": 0, 00:12:35.609 "state": "enabled", 00:12:35.609 "thread": "nvmf_tgt_poll_group_000", 00:12:35.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:35.609 "listen_address": { 00:12:35.609 "trtype": "TCP", 00:12:35.609 "adrfam": "IPv4", 00:12:35.609 "traddr": "10.0.0.3", 00:12:35.609 "trsvcid": "4420" 00:12:35.609 }, 00:12:35.609 "peer_address": { 00:12:35.609 "trtype": "TCP", 00:12:35.609 "adrfam": "IPv4", 00:12:35.609 "traddr": "10.0.0.1", 00:12:35.609 "trsvcid": "49154" 00:12:35.609 }, 00:12:35.609 "auth": { 00:12:35.609 "state": "completed", 00:12:35.609 "digest": "sha512", 00:12:35.609 "dhgroup": "ffdhe8192" 00:12:35.609 } 00:12:35.609 } 00:12:35.609 ]' 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
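
Besides the SPDK host RPCs, every pass also drives the kernel initiator through nvme-cli; a sketch of that connect/disconnect pair with the flags used above (the DHHC-1 strings are placeholders, not the generated test keys, and $hostnqn/$hostid are illustrative):

  # --dhchap-secret authenticates the host; --dhchap-ctrl-secret, when supplied,
  # additionally authenticates the controller (bidirectional DH-HMAC-CHAP).
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:02:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
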
00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.609 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.867 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:35.867 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:01:MDM3OGFiNjkxMzZmYmM5YjhlZTFhMzc4OGQwMjljYTWg8mFp: 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:36.804 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:36.804 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:36.804 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.804 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.805 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.740 00:12:37.740 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.740 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.740 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.740 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.740 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.740 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.740 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.740 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.740 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.740 { 00:12:37.740 "cntlid": 143, 00:12:37.740 "qid": 0, 00:12:37.741 "state": "enabled", 00:12:37.741 "thread": "nvmf_tgt_poll_group_000", 00:12:37.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:37.741 "listen_address": { 00:12:37.741 "trtype": "TCP", 00:12:37.741 "adrfam": "IPv4", 00:12:37.741 "traddr": "10.0.0.3", 00:12:37.741 "trsvcid": "4420" 00:12:37.741 }, 00:12:37.741 "peer_address": { 00:12:37.741 "trtype": "TCP", 00:12:37.741 "adrfam": "IPv4", 00:12:37.741 "traddr": "10.0.0.1", 00:12:37.741 "trsvcid": "49188" 00:12:37.741 }, 00:12:37.741 "auth": { 00:12:37.741 "state": "completed", 00:12:37.741 "digest": "sha512", 00:12:37.741 "dhgroup": "ffdhe8192" 00:12:37.741 } 00:12:37.741 } 00:12:37.741 ]' 00:12:37.741 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:37.999 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.999 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.999 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:37.999 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.999 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.999 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.999 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.258 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:38.258 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:39.198 19:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.198 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.199 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.768 00:12:39.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.336 { 00:12:40.336 "cntlid": 145, 00:12:40.336 "qid": 0, 00:12:40.336 "state": "enabled", 00:12:40.336 "thread": "nvmf_tgt_poll_group_000", 00:12:40.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:40.336 "listen_address": { 00:12:40.336 "trtype": "TCP", 00:12:40.336 "adrfam": "IPv4", 00:12:40.336 "traddr": "10.0.0.3", 
00:12:40.336 "trsvcid": "4420" 00:12:40.336 }, 00:12:40.336 "peer_address": { 00:12:40.336 "trtype": "TCP", 00:12:40.336 "adrfam": "IPv4", 00:12:40.336 "traddr": "10.0.0.1", 00:12:40.336 "trsvcid": "39676" 00:12:40.336 }, 00:12:40.336 "auth": { 00:12:40.336 "state": "completed", 00:12:40.336 "digest": "sha512", 00:12:40.336 "dhgroup": "ffdhe8192" 00:12:40.336 } 00:12:40.336 } 00:12:40.336 ]' 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.336 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.595 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:40.595 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:00:NTMyMGMyN2ZmOWEwYmY5MTU2ZmE2M2JmMWNkZjIyYjg0ZGI3NzVmZTg3YjY2MDJhYbXtRg==: --dhchap-ctrl-secret DHHC-1:03:MWVmODg1OTVmZjNmYTUwM2JiZjJlMGI2NzhhZjk3YzkyOWM4ZmNiMmQxZWYxMDE1ZDY0NmYzODkxOGMxYWVmMm+xLMk=: 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.161 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.161 
19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:41.162 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:41.729 request: 00:12:41.729 { 00:12:41.729 "name": "nvme0", 00:12:41.729 "trtype": "tcp", 00:12:41.729 "traddr": "10.0.0.3", 00:12:41.729 "adrfam": "ipv4", 00:12:41.729 "trsvcid": "4420", 00:12:41.729 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:41.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:41.729 "prchk_reftag": false, 00:12:41.729 "prchk_guard": false, 00:12:41.729 "hdgst": false, 00:12:41.729 "ddgst": false, 00:12:41.729 "dhchap_key": "key2", 00:12:41.729 "allow_unrecognized_csi": false, 00:12:41.729 "method": "bdev_nvme_attach_controller", 00:12:41.729 "req_id": 1 00:12:41.729 } 00:12:41.729 Got JSON-RPC error response 00:12:41.729 response: 00:12:41.729 { 00:12:41.729 "code": -5, 00:12:41.729 "message": "Input/output error" 00:12:41.729 } 00:12:41.729 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:41.729 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.729 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.729 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.730 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:41.730 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
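The trace above exercises the expected-failure path of the DH-HMAC-CHAP tests: only key1 has been allowed for this host NQN, so attaching with key2 is rejected and the JSON-RPC response carries code -5 ("Input/output error"), which the NOT/es=1 bookkeeping records as the expected outcome. Below is a minimal, hedged sketch of reproducing that check by hand; the NQNs, address and key names mirror the trace, while the rpc.py paths and variable names are illustrative.

# Sketch only: allow key1 for the host, then deliberately attach with key2.
# NQNs and address follow the trace above; paths/names are illustrative.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1
# Expected to fail: the subsystem has no key2 configured for this host, so the
# attach RPC should return code -5 ("Input/output error").
if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2; then
    echo "unexpected: attach with a disallowed key succeeded" >&2
fi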
00:12:41.730 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.730 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:41.730 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.298 request: 00:12:42.298 { 00:12:42.298 "name": "nvme0", 00:12:42.298 "trtype": "tcp", 00:12:42.298 "traddr": "10.0.0.3", 00:12:42.298 "adrfam": "ipv4", 00:12:42.298 "trsvcid": "4420", 00:12:42.298 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:42.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:42.298 "prchk_reftag": false, 00:12:42.298 "prchk_guard": false, 00:12:42.298 "hdgst": false, 00:12:42.298 "ddgst": false, 00:12:42.298 "dhchap_key": "key1", 00:12:42.298 "dhchap_ctrlr_key": "ckey2", 00:12:42.298 "allow_unrecognized_csi": false, 00:12:42.298 "method": "bdev_nvme_attach_controller", 00:12:42.298 "req_id": 1 00:12:42.298 } 00:12:42.298 Got JSON-RPC error response 00:12:42.298 response: 00:12:42.298 { 00:12:42.298 "code": -5, 00:12:42.298 "message": "Input/output error" 00:12:42.298 } 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:42.298 19:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.298 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.867 request: 00:12:42.867 { 00:12:42.867 "name": "nvme0", 00:12:42.867 "trtype": "tcp", 00:12:42.867 "traddr": "10.0.0.3", 00:12:42.867 "adrfam": "ipv4", 00:12:42.867 "trsvcid": "4420", 00:12:42.867 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:12:42.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:42.867 "prchk_reftag": false, 00:12:42.867 "prchk_guard": false, 00:12:42.867 "hdgst": false, 00:12:42.867 "ddgst": false, 00:12:42.867 "dhchap_key": "key1", 00:12:42.867 "dhchap_ctrlr_key": "ckey1", 00:12:42.867 "allow_unrecognized_csi": false, 00:12:42.867 "method": "bdev_nvme_attach_controller", 00:12:42.867 "req_id": 1 00:12:42.867 } 00:12:42.867 Got JSON-RPC error response 00:12:42.867 response: 00:12:42.867 { 00:12:42.867 "code": -5, 00:12:42.867 "message": "Input/output error" 00:12:42.867 } 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67036 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67036 ']' 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67036 00:12:42.867 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:42.868 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.868 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67036 00:12:42.868 killing process with pid 67036 00:12:42.868 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.868 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.868 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67036' 00:12:42.868 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67036 00:12:42.868 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67036 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70091 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70091 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70091 ']' 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.127 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70091 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70091 ']' 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
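The restarted target (pid 70091) runs inside the nvmf_tgt_ns_spdk namespace with -e 0xFFFF --wait-for-rpc -L nvmf_auth, and waitforlisten then blocks until /var/tmp/spdk.sock answers, which is what the "Waiting for process to start up..." message reports. The loop below only illustrates that polling idea under stated assumptions; it is not the harness's actual waitforlisten implementation.

# Illustration of the waitforlisten idea; not the harness's actual helper.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=0
    while (( i++ < 100 )); do
        # rpc_get_methods only succeeds once the app is serving its RPC socket.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            &>/dev/null && return 0
        kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
        sleep 0.5
    done
    return 1
}
wait_for_rpc_socket 70091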
00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.132 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.391 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.391 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:44.391 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:44.391 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.391 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.391 null0 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vAE 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.KY0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KY0 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.p39 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Yad ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yad 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.651 19:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gfT 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.7OW ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7OW 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rdc 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
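With the target restarted, the DH-HMAC-CHAP secrets are loaded into its keyring via keyring_file_add_key (key0 through key3 plus the controller keys that exist; there is no ckey3 in this run), and connect_authenticate is repeated for sha512/ffdhe8192 with key3 — the rpc.py call that the hostrpc wrapper above expands to follows right after this sketch. A condensed, hedged view of the key-registration step, issued against the target's RPC socket (file names are taken from the trace; the key material is never printed in the log and is not reproduced here):

# Condensed sketch of the keyring setup traced above; file names come from the
# trace, the key contents themselves do not appear in the log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target-side socket (default)
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.vAE
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KY0
$RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.p39
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yad
$RPC keyring_file_add_key key2  /tmp/spdk.key-sha384.gfT
$RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7OW
$RPC keyring_file_add_key key3  /tmp/spdk.key-sha512.Rdc
# No ckey3 exists in this run, so slot 3 gets only the host-side key.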
00:12:44.651 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.588 nvme0n1 00:12:45.588 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.588 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.588 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.847 { 00:12:45.847 "cntlid": 1, 00:12:45.847 "qid": 0, 00:12:45.847 "state": "enabled", 00:12:45.847 "thread": "nvmf_tgt_poll_group_000", 00:12:45.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:45.847 "listen_address": { 00:12:45.847 "trtype": "TCP", 00:12:45.847 "adrfam": "IPv4", 00:12:45.847 "traddr": "10.0.0.3", 00:12:45.847 "trsvcid": "4420" 00:12:45.847 }, 00:12:45.847 "peer_address": { 00:12:45.847 "trtype": "TCP", 00:12:45.847 "adrfam": "IPv4", 00:12:45.847 "traddr": "10.0.0.1", 00:12:45.847 "trsvcid": "39736" 00:12:45.847 }, 00:12:45.847 "auth": { 00:12:45.847 "state": "completed", 00:12:45.847 "digest": "sha512", 00:12:45.847 "dhgroup": "ffdhe8192" 00:12:45.847 } 00:12:45.847 } 00:12:45.847 ]' 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.847 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.106 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.106 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.106 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.365 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:46.365 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key3 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:46.933 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.193 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.453 request: 00:12:47.453 { 00:12:47.453 "name": "nvme0", 00:12:47.453 "trtype": "tcp", 00:12:47.453 "traddr": "10.0.0.3", 00:12:47.453 "adrfam": "ipv4", 00:12:47.453 "trsvcid": "4420", 00:12:47.453 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:47.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:47.453 "prchk_reftag": false, 00:12:47.453 "prchk_guard": false, 00:12:47.453 "hdgst": false, 00:12:47.453 "ddgst": false, 00:12:47.453 "dhchap_key": "key3", 00:12:47.453 "allow_unrecognized_csi": false, 00:12:47.453 "method": "bdev_nvme_attach_controller", 00:12:47.453 "req_id": 1 00:12:47.453 } 00:12:47.453 Got JSON-RPC error response 00:12:47.453 response: 00:12:47.453 { 00:12:47.453 "code": -5, 00:12:47.453 "message": "Input/output error" 00:12:47.453 } 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:47.453 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.714 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.973 request: 00:12:47.973 { 00:12:47.973 "name": "nvme0", 00:12:47.973 "trtype": "tcp", 00:12:47.973 "traddr": "10.0.0.3", 00:12:47.973 "adrfam": "ipv4", 00:12:47.973 "trsvcid": "4420", 00:12:47.973 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:47.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:47.973 "prchk_reftag": false, 00:12:47.973 "prchk_guard": false, 00:12:47.973 "hdgst": false, 00:12:47.973 "ddgst": false, 00:12:47.973 "dhchap_key": "key3", 00:12:47.973 "allow_unrecognized_csi": false, 00:12:47.973 "method": "bdev_nvme_attach_controller", 00:12:47.973 "req_id": 1 00:12:47.973 } 00:12:47.973 Got JSON-RPC error response 00:12:47.973 response: 00:12:47.973 { 00:12:47.973 "code": -5, 00:12:47.973 "message": "Input/output error" 00:12:47.973 } 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:47.973 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.233 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.801 request: 00:12:48.801 { 00:12:48.801 "name": "nvme0", 00:12:48.801 "trtype": "tcp", 00:12:48.801 "traddr": "10.0.0.3", 00:12:48.801 "adrfam": "ipv4", 00:12:48.801 "trsvcid": "4420", 00:12:48.801 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:48.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:48.801 "prchk_reftag": false, 00:12:48.801 "prchk_guard": false, 00:12:48.801 "hdgst": false, 00:12:48.801 "ddgst": false, 00:12:48.801 "dhchap_key": "key0", 00:12:48.801 "dhchap_ctrlr_key": "key1", 00:12:48.801 "allow_unrecognized_csi": false, 00:12:48.801 "method": "bdev_nvme_attach_controller", 00:12:48.801 "req_id": 1 00:12:48.801 } 00:12:48.801 Got JSON-RPC error response 00:12:48.801 response: 00:12:48.801 { 00:12:48.801 "code": -5, 00:12:48.801 "message": "Input/output error" 00:12:48.801 } 00:12:48.801 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:48.801 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.801 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.801 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:48.801 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:48.801 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:48.801 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:49.059 nvme0n1 00:12:49.059 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:49.059 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.059 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:49.319 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.319 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.319 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.578 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 00:12:49.578 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.578 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.578 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.578 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:49.578 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:49.578 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:50.514 nvme0n1 00:12:50.514 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:50.514 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:50.514 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:50.774 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.033 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.033 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:51.033 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid 560f6fb4-1392-4f8a-a310-a32d17cc4390 -l 0 --dhchap-secret DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: --dhchap-ctrl-secret DHHC-1:03:ZmUzYzEyNjgzNjFjODIwM2NiYzhjNzI3NzE5ZDI0YjdiZTAxMDQ0YTMyMzJjOGIxN2QyNmZhM2FhNGJkZTE4ZJAe1gY=: 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.601 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:51.860 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:52.429 request: 00:12:52.429 { 00:12:52.429 "name": "nvme0", 00:12:52.429 "trtype": "tcp", 00:12:52.429 "traddr": "10.0.0.3", 00:12:52.429 "adrfam": "ipv4", 00:12:52.429 "trsvcid": "4420", 00:12:52.429 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:52.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390", 00:12:52.429 "prchk_reftag": false, 00:12:52.429 "prchk_guard": false, 00:12:52.429 "hdgst": false, 00:12:52.429 "ddgst": false, 00:12:52.429 "dhchap_key": "key1", 00:12:52.429 "allow_unrecognized_csi": false, 00:12:52.429 "method": "bdev_nvme_attach_controller", 00:12:52.429 "req_id": 1 00:12:52.429 } 00:12:52.429 Got JSON-RPC error response 00:12:52.429 response: 00:12:52.429 { 00:12:52.429 "code": -5, 00:12:52.429 "message": "Input/output error" 00:12:52.429 } 00:12:52.429 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:52.429 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:52.429 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:52.429 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:52.429 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:52.429 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:52.429 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:53.365 nvme0n1 00:12:53.365 
19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:53.365 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.365 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:53.624 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.624 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.624 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.884 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:12:53.884 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.884 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.884 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.884 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:53.884 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:53.884 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:54.451 nvme0n1 00:12:54.451 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:54.451 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:54.451 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.451 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.451 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.451 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.711 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: '' 2s 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: ]] 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGY5ZjdlYTJlNDkzZmVkOWFiMDkwNWQwMDZmNTQxYWIqEP97: 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:54.711 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: 2s 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:57.298 19:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: ]] 00:12:57.298 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjQ2ZjFhZWJlM2JmYTEwYzJkMWYxZThkM2Q2MzkxMDJhNThiMDgxZDc0YzIxYTZl0ln8ZQ==: 00:12:57.299 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:57.299 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.203 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:59.204 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:59.204 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:59.770 nvme0n1 00:12:59.770 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:59.770 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.770 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.770 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.770 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:59.770 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:00.705 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:00.705 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.705 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:00.705 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.705 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:13:00.705 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.705 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.705 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.705 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:00.705 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:00.963 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:00.963 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:00.963 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:01.221 19:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:01.221 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:01.787 request: 00:13:01.787 { 00:13:01.787 "name": "nvme0", 00:13:01.787 "dhchap_key": "key1", 00:13:01.787 "dhchap_ctrlr_key": "key3", 00:13:01.787 "method": "bdev_nvme_set_keys", 00:13:01.787 "req_id": 1 00:13:01.787 } 00:13:01.787 Got JSON-RPC error response 00:13:01.787 response: 00:13:01.787 { 00:13:01.787 "code": -13, 00:13:01.787 "message": "Permission denied" 00:13:01.787 } 00:13:01.787 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:01.787 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.787 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.787 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.787 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:01.787 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.787 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:02.045 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:02.045 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:02.979 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:02.980 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.980 19:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:03.238 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:04.174 nvme0n1 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:13:04.174 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:04.741 request: 00:13:04.741 { 00:13:04.741 "name": "nvme0", 00:13:04.741 "dhchap_key": "key2", 00:13:04.741 "dhchap_ctrlr_key": "key0", 00:13:04.741 "method": "bdev_nvme_set_keys", 00:13:04.741 "req_id": 1 00:13:04.741 } 00:13:04.741 Got JSON-RPC error response 00:13:04.741 response: 00:13:04.741 { 00:13:04.741 "code": -13, 00:13:04.741 "message": "Permission denied" 00:13:04.741 } 00:13:04.741 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:04.741 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.741 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.741 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.741 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:04.741 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.741 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:05.000 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:05.000 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67068 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67068 ']' 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67068 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67068 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:06.376 killing process with pid 67068 00:13:06.376 19:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67068' 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67068 00:13:06.376 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67068 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.944 rmmod nvme_tcp 00:13:06.944 rmmod nvme_fabrics 00:13:06.944 rmmod nvme_keyring 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70091 ']' 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70091 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70091 ']' 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70091 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70091 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70091' 00:13:06.944 killing process with pid 70091 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70091 00:13:06.944 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70091 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:07.202 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:07.203 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vAE /tmp/spdk.key-sha256.p39 /tmp/spdk.key-sha384.gfT /tmp/spdk.key-sha512.Rdc /tmp/spdk.key-sha512.KY0 /tmp/spdk.key-sha384.Yad /tmp/spdk.key-sha256.7OW '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:07.462 00:13:07.462 real 3m7.977s 00:13:07.462 user 7m28.649s 00:13:07.462 sys 0m29.444s 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.462 ************************************ 00:13:07.462 END TEST nvmf_auth_target 
00:13:07.462 ************************************ 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.462 ************************************ 00:13:07.462 START TEST nvmf_bdevio_no_huge 00:13:07.462 ************************************ 00:13:07.462 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:07.722 * Looking for test storage... 00:13:07.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:07.722 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.722 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.722 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.722 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.722 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.722 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.723 --rc genhtml_branch_coverage=1 00:13:07.723 --rc genhtml_function_coverage=1 00:13:07.723 --rc genhtml_legend=1 00:13:07.723 --rc geninfo_all_blocks=1 00:13:07.723 --rc geninfo_unexecuted_blocks=1 00:13:07.723 00:13:07.723 ' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.723 --rc genhtml_branch_coverage=1 00:13:07.723 --rc genhtml_function_coverage=1 00:13:07.723 --rc genhtml_legend=1 00:13:07.723 --rc geninfo_all_blocks=1 00:13:07.723 --rc geninfo_unexecuted_blocks=1 00:13:07.723 00:13:07.723 ' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.723 --rc genhtml_branch_coverage=1 00:13:07.723 --rc genhtml_function_coverage=1 00:13:07.723 --rc genhtml_legend=1 00:13:07.723 --rc geninfo_all_blocks=1 00:13:07.723 --rc geninfo_unexecuted_blocks=1 00:13:07.723 00:13:07.723 ' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.723 --rc genhtml_branch_coverage=1 00:13:07.723 --rc genhtml_function_coverage=1 00:13:07.723 --rc genhtml_legend=1 00:13:07.723 --rc geninfo_all_blocks=1 00:13:07.723 --rc geninfo_unexecuted_blocks=1 00:13:07.723 00:13:07.723 ' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.723 
19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.723 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:07.724 
19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:07.724 Cannot find device "nvmf_init_br" 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:07.724 Cannot find device "nvmf_init_br2" 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:07.724 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:07.983 Cannot find device "nvmf_tgt_br" 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:07.983 Cannot find device "nvmf_tgt_br2" 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:07.983 Cannot find device "nvmf_init_br" 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:07.983 Cannot find device "nvmf_init_br2" 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:07.983 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:07.983 Cannot find device "nvmf_tgt_br" 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:07.984 Cannot find device "nvmf_tgt_br2" 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:07.984 Cannot find device "nvmf_br" 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:07.984 Cannot find device "nvmf_init_if" 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:07.984 Cannot find device "nvmf_init_if2" 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:07.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:07.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:07.984 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:08.243 19:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:08.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:08.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:13:08.243 00:13:08.243 --- 10.0.0.3 ping statistics --- 00:13:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.243 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:08.243 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:08.243 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:13:08.243 00:13:08.243 --- 10.0.0.4 ping statistics --- 00:13:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.243 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:08.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:08.243 00:13:08.243 --- 10.0.0.1 ping statistics --- 00:13:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.243 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:08.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:08.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:13:08.243 00:13:08.243 --- 10.0.0.2 ping statistics --- 00:13:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.243 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70732 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70732 00:13:08.243 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70732 ']' 00:13:08.244 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.244 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.244 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.244 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.244 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:08.244 [2024-11-26 19:21:06.600952] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:13:08.244 [2024-11-26 19:21:06.601060] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:08.502 [2024-11-26 19:21:06.768660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.502 [2024-11-26 19:21:06.849432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.502 [2024-11-26 19:21:06.849505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.502 [2024-11-26 19:21:06.849530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.502 [2024-11-26 19:21:06.849540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.502 [2024-11-26 19:21:06.849549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.502 [2024-11-26 19:21:06.850559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:08.502 [2024-11-26 19:21:06.850726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:08.502 [2024-11-26 19:21:06.850861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:08.502 [2024-11-26 19:21:06.850867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.502 [2024-11-26 19:21:06.857361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.445 [2024-11-26 19:21:07.674872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.445 Malloc0 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.445 19:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.445 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.446 [2024-11-26 19:21:07.715086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:09.446 { 00:13:09.446 "params": { 00:13:09.446 "name": "Nvme$subsystem", 00:13:09.446 "trtype": "$TEST_TRANSPORT", 00:13:09.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:09.446 "adrfam": "ipv4", 00:13:09.446 "trsvcid": "$NVMF_PORT", 00:13:09.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:09.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:09.446 "hdgst": ${hdgst:-false}, 00:13:09.446 "ddgst": ${ddgst:-false} 00:13:09.446 }, 00:13:09.446 "method": "bdev_nvme_attach_controller" 00:13:09.446 } 00:13:09.446 EOF 00:13:09.446 )") 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
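
Stripped of the xtrace noise, the target-side provisioning in bdevio.sh is a short rpc.py sequence, and the bdevio initiator gets its bdev_nvme configuration through bash process substitution, which is where the /dev/fd/62 path in the trace comes from. A condensed sketch, assuming rpc.py, bdevio, and gen_nvmf_target_json are reachable as plain names:

    # target side: transport, backing bdev, subsystem, namespace, listener (flags as traced above)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # initiator side: the generated JSON is handed over as an anonymous fd,
    # i.e. "--json /dev/fd/62" is just process substitution around gen_nvmf_target_json
    bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024
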
00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:09.446 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:09.446 "params": { 00:13:09.446 "name": "Nvme1", 00:13:09.446 "trtype": "tcp", 00:13:09.446 "traddr": "10.0.0.3", 00:13:09.446 "adrfam": "ipv4", 00:13:09.446 "trsvcid": "4420", 00:13:09.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:09.446 "hdgst": false, 00:13:09.446 "ddgst": false 00:13:09.446 }, 00:13:09.446 "method": "bdev_nvme_attach_controller" 00:13:09.446 }' 00:13:09.446 [2024-11-26 19:21:07.766076] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:13:09.446 [2024-11-26 19:21:07.766156] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70769 ] 00:13:09.718 [2024-11-26 19:21:07.920088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.718 [2024-11-26 19:21:08.001622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.718 [2024-11-26 19:21:08.001764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.718 [2024-11-26 19:21:08.001770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.718 [2024-11-26 19:21:08.015881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:09.976 I/O targets: 00:13:09.976 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:09.976 00:13:09.976 00:13:09.976 CUnit - A unit testing framework for C - Version 2.1-3 00:13:09.976 http://cunit.sourceforge.net/ 00:13:09.976 00:13:09.976 00:13:09.976 Suite: bdevio tests on: Nvme1n1 00:13:09.976 Test: blockdev write read block ...passed 00:13:09.976 Test: blockdev write zeroes read block ...passed 00:13:09.976 Test: blockdev write zeroes read no split ...passed 00:13:09.976 Test: blockdev write zeroes read split ...passed 00:13:09.976 Test: blockdev write zeroes read split partial ...passed 00:13:09.976 Test: blockdev reset ...[2024-11-26 19:21:08.259973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:09.976 [2024-11-26 19:21:08.260082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629320 (9): Bad file descriptor 00:13:09.976 [2024-11-26 19:21:08.277161] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:09.976 passed 00:13:09.976 Test: blockdev write read 8 blocks ...passed 00:13:09.976 Test: blockdev write read size > 128k ...passed 00:13:09.976 Test: blockdev write read invalid size ...passed 00:13:09.976 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:09.976 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:09.976 Test: blockdev write read max offset ...passed 00:13:09.976 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:09.976 Test: blockdev writev readv 8 blocks ...passed 00:13:09.976 Test: blockdev writev readv 30 x 1block ...passed 00:13:09.976 Test: blockdev writev readv block ...passed 00:13:09.976 Test: blockdev writev readv size > 128k ...passed 00:13:09.976 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:09.976 Test: blockdev comparev and writev ...[2024-11-26 19:21:08.285612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.285676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.285724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.285743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.286154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.286189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.286217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.286234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.286628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.286670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.286699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.286717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.287094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.287133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.287163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:09.976 [2024-11-26 19:21:08.287180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:09.976 passed 00:13:09.976 Test: blockdev nvme passthru rw ...passed 00:13:09.976 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:21:08.288443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:09.976 [2024-11-26 19:21:08.288489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.288637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:09.976 [2024-11-26 19:21:08.288665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.288797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:09.976 [2024-11-26 19:21:08.288831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:09.976 [2024-11-26 19:21:08.288986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:09.976 [2024-11-26 19:21:08.289019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:09.976 passed 00:13:09.976 Test: blockdev nvme admin passthru ...passed 00:13:09.976 Test: blockdev copy ...passed 00:13:09.976 00:13:09.976 Run Summary: Type Total Ran Passed Failed Inactive 00:13:09.976 suites 1 1 n/a 0 0 00:13:09.976 tests 23 23 23 0 0 00:13:09.976 asserts 152 152 152 0 n/a 00:13:09.976 00:13:09.976 Elapsed time = 0.162 seconds 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.235 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.494 rmmod nvme_tcp 00:13:10.494 rmmod nvme_fabrics 00:13:10.494 rmmod nvme_keyring 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70732 ']' 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70732 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70732 ']' 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70732 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70732 00:13:10.494 killing process with pid 70732 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70732' 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70732 00:13:10.494 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70732 00:13:10.752 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.752 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.752 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.752 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:10.752 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:10.752 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.753 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.753 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.753 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:10.753 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:10.753 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:11.012 19:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:11.012 00:13:11.012 real 0m3.509s 00:13:11.012 user 0m10.657s 00:13:11.012 sys 0m1.422s 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.012 ************************************ 00:13:11.012 END TEST nvmf_bdevio_no_huge 00:13:11.012 ************************************ 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.012 ************************************ 00:13:11.012 START TEST nvmf_tls 00:13:11.012 ************************************ 00:13:11.012 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:11.272 * Looking for test storage... 
00:13:11.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.272 --rc genhtml_branch_coverage=1 00:13:11.272 --rc genhtml_function_coverage=1 00:13:11.272 --rc genhtml_legend=1 00:13:11.272 --rc geninfo_all_blocks=1 00:13:11.272 --rc geninfo_unexecuted_blocks=1 00:13:11.272 00:13:11.272 ' 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.272 --rc genhtml_branch_coverage=1 00:13:11.272 --rc genhtml_function_coverage=1 00:13:11.272 --rc genhtml_legend=1 00:13:11.272 --rc geninfo_all_blocks=1 00:13:11.272 --rc geninfo_unexecuted_blocks=1 00:13:11.272 00:13:11.272 ' 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.272 --rc genhtml_branch_coverage=1 00:13:11.272 --rc genhtml_function_coverage=1 00:13:11.272 --rc genhtml_legend=1 00:13:11.272 --rc geninfo_all_blocks=1 00:13:11.272 --rc geninfo_unexecuted_blocks=1 00:13:11.272 00:13:11.272 ' 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.272 --rc genhtml_branch_coverage=1 00:13:11.272 --rc genhtml_function_coverage=1 00:13:11.272 --rc genhtml_legend=1 00:13:11.272 --rc geninfo_all_blocks=1 00:13:11.272 --rc geninfo_unexecuted_blocks=1 00:13:11.272 00:13:11.272 ' 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.272 19:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.272 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.273 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.273 
19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:11.273 Cannot find device "nvmf_init_br" 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:11.273 Cannot find device "nvmf_init_br2" 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:11.273 Cannot find device "nvmf_tgt_br" 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.273 Cannot find device "nvmf_tgt_br2" 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:11.273 Cannot find device "nvmf_init_br" 00:13:11.273 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:11.532 Cannot find device "nvmf_init_br2" 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:11.532 Cannot find device "nvmf_tgt_br" 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:11.532 Cannot find device "nvmf_tgt_br2" 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:11.532 Cannot find device "nvmf_br" 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:11.532 Cannot find device "nvmf_init_if" 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:11.532 Cannot find device "nvmf_init_if2" 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:11.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:11.532 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:11.533 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:11.792 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:11.792 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:11.792 19:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:11.792 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:11.792 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:13:11.792 00:13:11.792 --- 10.0.0.3 ping statistics --- 00:13:11.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.792 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:11.792 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:11.792 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:13:11.792 00:13:11.792 --- 10.0.0.4 ping statistics --- 00:13:11.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.792 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:11.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:11.792 00:13:11.792 --- 10.0.0.1 ping statistics --- 00:13:11.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.792 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:11.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:11.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:13:11.792 00:13:11.792 --- 10.0.0.2 ping statistics --- 00:13:11.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.792 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71001 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71001 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71001 ']' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.792 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:11.792 [2024-11-26 19:21:10.125747] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
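
For the TLS suite the target is launched with --wait-for-rpc, so the socket layer can be switched to the ssl implementation before initialization finishes; the traces that follow amount to roughly the sequence below (a sketch, with nvmf_tgt and rpc.py assumed on PATH, and the intermediate get/set toggling of tls_version and ktls in tls.sh being option-RPC coverage rather than setup):

    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -m 0x2 --wait-for-rpc &
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13   # require TLS 1.3
    rpc.py framework_start_init                            # finish the deferred initialization
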
00:13:11.792 [2024-11-26 19:21:10.125854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.051 [2024-11-26 19:21:10.284537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.051 [2024-11-26 19:21:10.347891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.051 [2024-11-26 19:21:10.347968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.051 [2024-11-26 19:21:10.347982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.051 [2024-11-26 19:21:10.347993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.051 [2024-11-26 19:21:10.348002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.051 [2024-11-26 19:21:10.348569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:12.051 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:12.310 true 00:13:12.310 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:12.310 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:12.568 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:12.568 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:12.568 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:12.827 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:12.827 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:13.394 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:13.394 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:13.394 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:13.394 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:13.394 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:13.653 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:13.653 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:13.653 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:13.653 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:13.911 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:13.911 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:13.911 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:14.170 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:14.170 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:14.428 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:14.428 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:14.428 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:14.686 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:14.686 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3ZpbYPyFn6 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.nffhvXC6cN 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3ZpbYPyFn6 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.nffhvXC6cN 00:13:14.944 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:15.202 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:15.460 [2024-11-26 19:21:13.756001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:15.460 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3ZpbYPyFn6 00:13:15.460 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3ZpbYPyFn6 00:13:15.460 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:15.719 [2024-11-26 19:21:14.072011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.719 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:15.978 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:16.237 [2024-11-26 19:21:14.576134] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:16.237 [2024-11-26 19:21:14.576375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:16.237 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:16.495 malloc0 00:13:16.495 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:16.753 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3ZpbYPyFn6 00:13:17.011 19:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:17.270 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3ZpbYPyFn6 00:13:27.254 Initializing NVMe Controllers 00:13:27.254 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.254 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:27.254 Initialization complete. Launching workers. 00:13:27.254 ======================================================== 00:13:27.254 Latency(us) 00:13:27.254 Device Information : IOPS MiB/s Average min max 00:13:27.254 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10894.56 42.56 5875.83 879.85 9072.51 00:13:27.254 ======================================================== 00:13:27.254 Total : 10894.56 42.56 5875.83 879.85 9072.51 00:13:27.254 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ZpbYPyFn6 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3ZpbYPyFn6 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71226 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71226 /var/tmp/bdevperf.sock 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71226 ']' 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
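Up to this point the harness has configured the target for TLS and checked the data path with spdk_nvme_perf. Condensed into one place, the target-side sequence visible in the log is roughly the script below. It is a minimal sketch, not the tls.sh implementation itself: it assumes an nvmf_tgt started with --wait-for-rpc and listening on the default RPC socket, that SPDK_DIR points at the checkout, and that KEY_PATH already holds an NVMeTLSkey-1:01:...: string with mode 0600; the 10.0.0.3 address and the nvmf_tgt_ns_spdk network namespace are specific to this test bed.

#!/usr/bin/env bash
# Minimal sketch of the target-side TLS setup exercised above (RPC names,
# flags, NQNs and addresses copied from the log; SPDK_DIR and KEY_PATH are
# assumptions about the local environment).
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk            # assumed checkout location
RPC="$SPDK_DIR/scripts/rpc.py"
KEY_PATH=/tmp/tls_psk.key                        # NVMeTLSkey-1:01:...: contents, chmod 0600
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2016-06.io.spdk:host1

# nvmf_tgt is assumed to have been started with --wait-for-rpc, so the ssl
# socket implementation and TLS 1.3 can be forced before subsystem init.
"$RPC" sock_set_default_impl -i ssl
"$RPC" sock_impl_set_options -i ssl --tls-version 13
"$RPC" framework_start_init

# Transport, subsystem, one malloc namespace, and a TLS listener (-k).
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem "$SUBNQN" -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.3 -s 4420 -k
"$RPC" bdev_malloc_create 32 4096 -b malloc0
"$RPC" nvmf_subsystem_add_ns "$SUBNQN" malloc0 -n 1

# Register the PSK file in the keyring and allow only this host to use it.
"$RPC" keyring_file_add_key key0 "$KEY_PATH"
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --psk key0

# Data-path check over TLS, mirroring the spdk_nvme_perf run in the log
# (the nvmf_tgt_ns_spdk namespace is specific to this test bed).
ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/spdk_nvme_perf" \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:$SUBNQN hostnqn:$HOSTNQN" \
    --psk-path "$KEY_PATH"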
00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.254 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.511 [2024-11-26 19:21:25.722702] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:13:27.511 [2024-11-26 19:21:25.722784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71226 ] 00:13:27.511 [2024-11-26 19:21:25.870956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.511 [2024-11-26 19:21:25.926319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.769 [2024-11-26 19:21:25.984777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.769 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.769 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:27.769 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3ZpbYPyFn6 00:13:28.027 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:28.285 [2024-11-26 19:21:26.572508] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:28.285 TLSTESTn1 00:13:28.285 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:28.542 Running I/O for 10 seconds... 
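The initiator side of the run above (run_bdevperf) follows the pattern every later test case reuses: start bdevperf paused, register the PSK in its keyring over its private RPC socket, attach a TLS-protected controller, then drive I/O through bdevperf.py. A minimal stand-alone sketch, with SPDK_DIR and KEY_PATH as assumptions, the bdevperf flags and RPC calls taken from the log, and a simple readiness loop standing in for the waitforlisten helper:

#!/usr/bin/env bash
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
SOCK=/var/tmp/bdevperf.sock
KEY_PATH=/tmp/tls_psk.key

# Start bdevperf paused (-z) with its own RPC socket.
"$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" \
    -q 128 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# Hand the PSK to bdevperf's keyring, then attach a TLS-protected controller.
"$RPC" -s "$SOCK" keyring_file_add_key key0 "$KEY_PATH"
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Run the configured verify workload against TLSTESTn1, then shut down.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests
kill "$BDEVPERF_PID"
wait "$BDEVPERF_PID" || true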
00:13:30.411 4542.00 IOPS, 17.74 MiB/s [2024-11-26T19:21:29.786Z] 4579.50 IOPS, 17.89 MiB/s [2024-11-26T19:21:31.160Z] 4593.33 IOPS, 17.94 MiB/s [2024-11-26T19:21:32.095Z] 4604.75 IOPS, 17.99 MiB/s [2024-11-26T19:21:33.032Z] 4606.00 IOPS, 17.99 MiB/s [2024-11-26T19:21:33.967Z] 4605.67 IOPS, 17.99 MiB/s [2024-11-26T19:21:34.902Z] 4607.29 IOPS, 18.00 MiB/s [2024-11-26T19:21:35.838Z] 4610.12 IOPS, 18.01 MiB/s [2024-11-26T19:21:36.775Z] 4615.89 IOPS, 18.03 MiB/s [2024-11-26T19:21:37.034Z] 4615.50 IOPS, 18.03 MiB/s 00:13:38.594 Latency(us) 00:13:38.594 [2024-11-26T19:21:37.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.594 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:38.594 Verification LBA range: start 0x0 length 0x2000 00:13:38.594 TLSTESTn1 : 10.02 4621.02 18.05 0.00 0.00 27650.50 5302.46 22520.55 00:13:38.594 [2024-11-26T19:21:37.034Z] =================================================================================================================== 00:13:38.594 [2024-11-26T19:21:37.034Z] Total : 4621.02 18.05 0.00 0.00 27650.50 5302.46 22520.55 00:13:38.594 { 00:13:38.594 "results": [ 00:13:38.594 { 00:13:38.594 "job": "TLSTESTn1", 00:13:38.594 "core_mask": "0x4", 00:13:38.594 "workload": "verify", 00:13:38.594 "status": "finished", 00:13:38.594 "verify_range": { 00:13:38.594 "start": 0, 00:13:38.594 "length": 8192 00:13:38.594 }, 00:13:38.594 "queue_depth": 128, 00:13:38.594 "io_size": 4096, 00:13:38.594 "runtime": 10.015752, 00:13:38.594 "iops": 4621.020967771567, 00:13:38.594 "mibps": 18.050863155357682, 00:13:38.594 "io_failed": 0, 00:13:38.594 "io_timeout": 0, 00:13:38.594 "avg_latency_us": 27650.495752730727, 00:13:38.594 "min_latency_us": 5302.458181818181, 00:13:38.594 "max_latency_us": 22520.552727272727 00:13:38.594 } 00:13:38.594 ], 00:13:38.594 "core_count": 1 00:13:38.594 } 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71226 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71226 ']' 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71226 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71226 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:38.594 killing process with pid 71226 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71226' 00:13:38.594 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.594 00:13:38.594 Latency(us) 00:13:38.594 [2024-11-26T19:21:37.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.594 [2024-11-26T19:21:37.034Z] =================================================================================================================== 00:13:38.594 [2024-11-26T19:21:37.034Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71226 00:13:38.594 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71226 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nffhvXC6cN 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nffhvXC6cN 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nffhvXC6cN 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nffhvXC6cN 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71359 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71359 /var/tmp/bdevperf.sock 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71359 ']' 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:38.854 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.855 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.855 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:38.855 [2024-11-26 19:21:37.095590] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:13:38.855 [2024-11-26 19:21:37.095687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71359 ] 00:13:38.855 [2024-11-26 19:21:37.241889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.855 [2024-11-26 19:21:37.286460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.113 [2024-11-26 19:21:37.339272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:39.113 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.113 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:39.113 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nffhvXC6cN 00:13:39.371 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:39.660 [2024-11-26 19:21:37.943955] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:39.660 [2024-11-26 19:21:37.949830] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:39.660 [2024-11-26 19:21:37.950515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1076ff0 (107): Transport endpoint is not connected 00:13:39.660 [2024-11-26 19:21:37.951518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1076ff0 (9): Bad file descriptor 00:13:39.660 [2024-11-26 19:21:37.952503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:39.660 [2024-11-26 19:21:37.952539] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:39.660 [2024-11-26 19:21:37.952549] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:39.660 [2024-11-26 19:21:37.952564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:39.660 request: 00:13:39.660 { 00:13:39.660 "name": "TLSTEST", 00:13:39.660 "trtype": "tcp", 00:13:39.660 "traddr": "10.0.0.3", 00:13:39.660 "adrfam": "ipv4", 00:13:39.660 "trsvcid": "4420", 00:13:39.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.660 "prchk_reftag": false, 00:13:39.660 "prchk_guard": false, 00:13:39.660 "hdgst": false, 00:13:39.660 "ddgst": false, 00:13:39.660 "psk": "key0", 00:13:39.660 "allow_unrecognized_csi": false, 00:13:39.660 "method": "bdev_nvme_attach_controller", 00:13:39.660 "req_id": 1 00:13:39.660 } 00:13:39.660 Got JSON-RPC error response 00:13:39.660 response: 00:13:39.660 { 00:13:39.660 "code": -5, 00:13:39.660 "message": "Input/output error" 00:13:39.660 } 00:13:39.660 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71359 00:13:39.660 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71359 ']' 00:13:39.660 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71359 00:13:39.660 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:39.660 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.660 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71359 00:13:39.660 killing process with pid 71359 00:13:39.660 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.660 00:13:39.660 Latency(us) 00:13:39.660 [2024-11-26T19:21:38.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.660 [2024-11-26T19:21:38.100Z] =================================================================================================================== 00:13:39.660 [2024-11-26T19:21:38.100Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:39.660 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:39.660 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:39.660 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71359' 00:13:39.660 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71359 00:13:39.660 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71359 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3ZpbYPyFn6 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3ZpbYPyFn6 
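From target/tls.sh@147 onward the script switches to negative tests: run_bdevperf is wrapped in the NOT helper, so each step passes only if bdev_nvme_attach_controller fails (an unregistered key above, then a wrong hostnqn, a wrong subnqn, and an empty key path in the cases that follow). A simplified stand-alone version of that expectation, assuming a bdevperf instance set up as in the earlier sketch; key0 is the PSK the target accepts and key_wrong is a hypothetical key name for a PSK the target was never configured with (the real test re-registers the second key file as key0 instead):

# Simplified stand-alone version of the expectation the NOT wrapper encodes
# (not the autotest_common.sh implementation).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

expect_attach_failure() {
    local subnqn=$1 hostnqn=$2 psk_name=$3
    if "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" --psk "$psk_name"; then
        echo "attach unexpectedly succeeded: $hostnqn -> $subnqn ($psk_name)" >&2
        return 1
    fi
}

# Unregistered key, wrong hostnqn, wrong subnqn: all three must be rejected.
expect_attach_failure nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 key_wrong
expect_attach_failure nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 key0
expect_attach_failure nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 key0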
00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3ZpbYPyFn6 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3ZpbYPyFn6 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71381 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71381 /var/tmp/bdevperf.sock 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71381 ']' 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.942 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.942 [2024-11-26 19:21:38.251738] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:13:39.942 [2024-11-26 19:21:38.251871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:13:40.201 [2024-11-26 19:21:38.393383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.201 [2024-11-26 19:21:38.441444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.201 [2024-11-26 19:21:38.494330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:41.136 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.136 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:41.136 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3ZpbYPyFn6 00:13:41.136 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:41.395 [2024-11-26 19:21:39.711359] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:41.395 [2024-11-26 19:21:39.719947] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:41.395 [2024-11-26 19:21:39.719996] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:41.395 [2024-11-26 19:21:39.720043] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:41.395 [2024-11-26 19:21:39.720973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x775ff0 (107): Transport endpoint is not connected 00:13:41.395 [2024-11-26 19:21:39.721964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x775ff0 (9): Bad file descriptor 00:13:41.395 [2024-11-26 19:21:39.722961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:41.395 [2024-11-26 19:21:39.722988] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:41.395 [2024-11-26 19:21:39.722998] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:41.395 [2024-11-26 19:21:39.723012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
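The target-side errors just above show how the PSK lookup works during the handshake: the initiator offers an identity built from the NVMe0R01 prefix plus its host NQN and the subsystem NQN, and the target only finds a key if exactly that host/subsystem pair was configured with nvmf_subsystem_add_host --psk. The identity format here is copied verbatim from the log lines above; a trivial helper for reconstructing it when debugging:

# Reconstruct the PSK identity a given host/subsystem pair will present
# (format copied from the "Could not find PSK for identity" lines above).
psk_identity() {
    printf 'NVMe0R01 %s %s\n' "$1" "$2"   # $1 = hostnqn, $2 = subnqn
}

psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1   # configured, accepted
psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1   # no PSK on the target, handshake aborted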
00:13:41.395 request: 00:13:41.395 { 00:13:41.395 "name": "TLSTEST", 00:13:41.395 "trtype": "tcp", 00:13:41.395 "traddr": "10.0.0.3", 00:13:41.395 "adrfam": "ipv4", 00:13:41.395 "trsvcid": "4420", 00:13:41.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.395 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:41.395 "prchk_reftag": false, 00:13:41.395 "prchk_guard": false, 00:13:41.395 "hdgst": false, 00:13:41.395 "ddgst": false, 00:13:41.395 "psk": "key0", 00:13:41.395 "allow_unrecognized_csi": false, 00:13:41.395 "method": "bdev_nvme_attach_controller", 00:13:41.395 "req_id": 1 00:13:41.395 } 00:13:41.395 Got JSON-RPC error response 00:13:41.395 response: 00:13:41.395 { 00:13:41.395 "code": -5, 00:13:41.395 "message": "Input/output error" 00:13:41.395 } 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71381 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71381 ']' 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71381 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71381 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71381' 00:13:41.395 killing process with pid 71381 00:13:41.395 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.395 00:13:41.395 Latency(us) 00:13:41.395 [2024-11-26T19:21:39.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.395 [2024-11-26T19:21:39.835Z] =================================================================================================================== 00:13:41.395 [2024-11-26T19:21:39.835Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71381 00:13:41.395 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71381 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ZpbYPyFn6 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ZpbYPyFn6 
00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ZpbYPyFn6 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3ZpbYPyFn6 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71404 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71404 /var/tmp/bdevperf.sock 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71404 ']' 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.654 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.654 [2024-11-26 19:21:39.992963] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:13:41.654 [2024-11-26 19:21:39.993070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71404 ] 00:13:41.913 [2024-11-26 19:21:40.135964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.913 [2024-11-26 19:21:40.181329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.913 [2024-11-26 19:21:40.234491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:41.913 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.913 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:41.913 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3ZpbYPyFn6 00:13:42.172 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:42.431 [2024-11-26 19:21:40.786768] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.431 [2024-11-26 19:21:40.791600] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:42.431 [2024-11-26 19:21:40.791651] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:42.431 [2024-11-26 19:21:40.791711] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:42.431 [2024-11-26 19:21:40.792378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b9ff0 (107): Transport endpoint is not connected 00:13:42.431 [2024-11-26 19:21:40.793366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b9ff0 (9): Bad file descriptor 00:13:42.431 [2024-11-26 19:21:40.794362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:42.431 [2024-11-26 19:21:40.794398] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:42.431 [2024-11-26 19:21:40.794423] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:42.431 [2024-11-26 19:21:40.794437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
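The subsystem-NQN mismatch above fails for the same reason as the hostnqn case earlier: no configured host/subsystem pair yields a PSK for the offered identity, so the server aborts the handshake and the initiator sees the connection drop (errno 107) before the controller can initialize. One way to check what the target will actually accept is to dump its subsystem configuration; the sketch below uses nvmf_get_subsystems, a standard SPDK RPC that does not appear in this log:

# For this run the output should report nqn.2016-06.io.spdk:cnode1 with a
# "hosts" array containing only nqn.2016-06.io.spdk:host1; cnode2 should not
# exist at all, which is why both mismatch cases above are rejected.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" nvmf_get_subsystems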
00:13:42.431 request: 00:13:42.431 { 00:13:42.431 "name": "TLSTEST", 00:13:42.431 "trtype": "tcp", 00:13:42.431 "traddr": "10.0.0.3", 00:13:42.431 "adrfam": "ipv4", 00:13:42.431 "trsvcid": "4420", 00:13:42.431 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:42.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.431 "prchk_reftag": false, 00:13:42.431 "prchk_guard": false, 00:13:42.431 "hdgst": false, 00:13:42.431 "ddgst": false, 00:13:42.431 "psk": "key0", 00:13:42.431 "allow_unrecognized_csi": false, 00:13:42.431 "method": "bdev_nvme_attach_controller", 00:13:42.431 "req_id": 1 00:13:42.431 } 00:13:42.431 Got JSON-RPC error response 00:13:42.431 response: 00:13:42.431 { 00:13:42.431 "code": -5, 00:13:42.431 "message": "Input/output error" 00:13:42.431 } 00:13:42.431 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71404 00:13:42.431 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71404 ']' 00:13:42.431 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71404 00:13:42.431 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:42.431 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.431 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71404 00:13:42.431 killing process with pid 71404 00:13:42.431 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.431 00:13:42.431 Latency(us) 00:13:42.431 [2024-11-26T19:21:40.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.432 [2024-11-26T19:21:40.872Z] =================================================================================================================== 00:13:42.432 [2024-11-26T19:21:40.872Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:42.432 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:42.432 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:42.432 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71404' 00:13:42.432 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71404 00:13:42.432 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71404 00:13:42.690 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:42.690 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:42.691 19:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:42.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71426 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71426 /var/tmp/bdevperf.sock 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71426 ']' 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.691 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.691 [2024-11-26 19:21:41.084322] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:13:42.691 [2024-11-26 19:21:41.084450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71426 ] 00:13:42.950 [2024-11-26 19:21:41.232503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.950 [2024-11-26 19:21:41.276734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.950 [2024-11-26 19:21:41.328437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:43.209 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.209 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:43.209 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:43.468 [2024-11-26 19:21:41.657372] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:43.468 [2024-11-26 19:21:41.657425] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:43.468 request: 00:13:43.468 { 00:13:43.468 "name": "key0", 00:13:43.468 "path": "", 00:13:43.468 "method": "keyring_file_add_key", 00:13:43.468 "req_id": 1 00:13:43.468 } 00:13:43.468 Got JSON-RPC error response 00:13:43.468 response: 00:13:43.468 { 00:13:43.468 "code": -1, 00:13:43.468 "message": "Operation not permitted" 00:13:43.468 } 00:13:43.468 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:43.468 [2024-11-26 19:21:41.897494] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:43.468 [2024-11-26 19:21:41.897766] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:43.468 request: 00:13:43.468 { 00:13:43.468 "name": "TLSTEST", 00:13:43.468 "trtype": "tcp", 00:13:43.468 "traddr": "10.0.0.3", 00:13:43.468 "adrfam": "ipv4", 00:13:43.468 "trsvcid": "4420", 00:13:43.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:43.468 "prchk_reftag": false, 00:13:43.468 "prchk_guard": false, 00:13:43.468 "hdgst": false, 00:13:43.468 "ddgst": false, 00:13:43.468 "psk": "key0", 00:13:43.468 "allow_unrecognized_csi": false, 00:13:43.468 "method": "bdev_nvme_attach_controller", 00:13:43.468 "req_id": 1 00:13:43.468 } 00:13:43.468 Got JSON-RPC error response 00:13:43.468 response: 00:13:43.468 { 00:13:43.468 "code": -126, 00:13:43.468 "message": "Required key not available" 00:13:43.468 } 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71426 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71426 ']' 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71426 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.727 19:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71426 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:43.727 killing process with pid 71426 00:13:43.727 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.727 00:13:43.727 Latency(us) 00:13:43.727 [2024-11-26T19:21:42.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.727 [2024-11-26T19:21:42.167Z] =================================================================================================================== 00:13:43.727 [2024-11-26T19:21:42.167Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71426' 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71426 00:13:43.727 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71426 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71001 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71001 ']' 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71001 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71001 00:13:43.727 killing process with pid 71001 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71001' 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71001 00:13:43.727 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71001 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.aPdljVs0ge 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.aPdljVs0ge 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.986 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71463 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71463 00:13:43.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71463 ']' 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.987 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.246 [2024-11-26 19:21:42.467543] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:13:44.246 [2024-11-26 19:21:42.467804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.246 [2024-11-26 19:21:42.610396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.246 [2024-11-26 19:21:42.663479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.246 [2024-11-26 19:21:42.663659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:44.246 [2024-11-26 19:21:42.663863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.246 [2024-11-26 19:21:42.664017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.246 [2024-11-26 19:21:42.664032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.246 [2024-11-26 19:21:42.664446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.506 [2024-11-26 19:21:42.717215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.aPdljVs0ge 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aPdljVs0ge 00:13:45.073 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:45.332 [2024-11-26 19:21:43.720607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.332 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:45.615 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:45.874 [2024-11-26 19:21:44.244744] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:45.874 [2024-11-26 19:21:44.245298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:45.874 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:46.133 malloc0 00:13:46.133 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:46.391 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:13:46.649 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPdljVs0ge 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aPdljVs0ge 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71524 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71524 /var/tmp/bdevperf.sock 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71524 ']' 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:46.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.907 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.907 [2024-11-26 19:21:45.284148] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
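For reference, the setup_nvmf_tgt sequence traced above (target/tls.sh@50-59) reduces to a handful of RPCs against the nvmf_tgt instance started with -m 0x2; the listener is created with -k, which the log notes enables the (experimental) TLS support on that port. A condensed sketch using the same names and paths as this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.aPdljVs0ge                                                # 0600 PSK interchange file from above

$rpc nvmf_create_transport -t tcp -o                                   # TCP transport, flags as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # TLS-capable listener
$rpc bdev_malloc_create 32 4096 -b malloc0                             # 32 MB malloc bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # expose it as namespace 1
$rpc keyring_file_add_key key0 "$key"                                  # register the PSK file in the keyring
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The bdevperf process being launched here (pid 71524) is the initiator side of the same test.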
00:13:46.907 [2024-11-26 19:21:45.284419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71524 ] 00:13:47.198 [2024-11-26 19:21:45.432581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.198 [2024-11-26 19:21:45.489934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.198 [2024-11-26 19:21:45.548443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.198 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.198 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:47.198 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:13:47.456 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:47.714 [2024-11-26 19:21:46.103629] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:47.973 TLSTESTn1 00:13:47.973 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:47.973 Running I/O for 10 seconds... 00:13:50.283 4736.00 IOPS, 18.50 MiB/s [2024-11-26T19:21:49.660Z] 4736.00 IOPS, 18.50 MiB/s [2024-11-26T19:21:50.597Z] 4699.33 IOPS, 18.36 MiB/s [2024-11-26T19:21:51.533Z] 4708.75 IOPS, 18.39 MiB/s [2024-11-26T19:21:52.492Z] 4714.80 IOPS, 18.42 MiB/s [2024-11-26T19:21:53.430Z] 4709.33 IOPS, 18.40 MiB/s [2024-11-26T19:21:54.367Z] 4692.57 IOPS, 18.33 MiB/s [2024-11-26T19:21:55.744Z] 4693.88 IOPS, 18.34 MiB/s [2024-11-26T19:21:56.681Z] 4695.67 IOPS, 18.34 MiB/s [2024-11-26T19:21:56.681Z] 4697.10 IOPS, 18.35 MiB/s 00:13:58.241 Latency(us) 00:13:58.241 [2024-11-26T19:21:56.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.241 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:58.241 Verification LBA range: start 0x0 length 0x2000 00:13:58.241 TLSTESTn1 : 10.02 4702.39 18.37 0.00 0.00 27171.36 5540.77 22878.02 00:13:58.241 [2024-11-26T19:21:56.681Z] =================================================================================================================== 00:13:58.241 [2024-11-26T19:21:56.681Z] Total : 4702.39 18.37 0.00 0.00 27171.36 5540.77 22878.02 00:13:58.241 { 00:13:58.241 "results": [ 00:13:58.241 { 00:13:58.241 "job": "TLSTESTn1", 00:13:58.241 "core_mask": "0x4", 00:13:58.241 "workload": "verify", 00:13:58.241 "status": "finished", 00:13:58.241 "verify_range": { 00:13:58.241 "start": 0, 00:13:58.241 "length": 8192 00:13:58.241 }, 00:13:58.241 "queue_depth": 128, 00:13:58.241 "io_size": 4096, 00:13:58.241 "runtime": 10.015968, 00:13:58.241 "iops": 4702.391221697194, 00:13:58.241 "mibps": 18.368715709754664, 00:13:58.241 "io_failed": 0, 00:13:58.241 "io_timeout": 0, 00:13:58.241 "avg_latency_us": 27171.363872384856, 00:13:58.241 "min_latency_us": 5540.770909090909, 00:13:58.241 
"max_latency_us": 22878.02181818182 00:13:58.241 } 00:13:58.241 ], 00:13:58.241 "core_count": 1 00:13:58.241 } 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71524 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71524 ']' 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71524 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71524 00:13:58.241 killing process with pid 71524 00:13:58.241 Received shutdown signal, test time was about 10.000000 seconds 00:13:58.241 00:13:58.241 Latency(us) 00:13:58.241 [2024-11-26T19:21:56.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.241 [2024-11-26T19:21:56.681Z] =================================================================================================================== 00:13:58.241 [2024-11-26T19:21:56.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71524' 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71524 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71524 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.aPdljVs0ge 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPdljVs0ge 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPdljVs0ge 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPdljVs0ge 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aPdljVs0ge 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71652 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71652 /var/tmp/bdevperf.sock 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71652 ']' 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:58.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.241 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.241 [2024-11-26 19:21:56.656421] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
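run_bdevperf drives the initiator purely through bdevperf's private RPC socket; in the passing run above (pid 71524) the traced calls boil down to the sketch below, and the NOT-wrapped repeat starting here (pid 71652) replays the same sequence expecting the key registration to fail after the chmod 0666:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock         # bdevperf was started with: -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10

$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge          # the step that fails once the file is 0666
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # PSK is applied on attach
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

The numbers reported above are self-consistent: 4702.39 IOPS of 4096-byte verify I/O is 4702.39 * 4096 / 2^20, about 18.4 MiB/s, matching the 18.37 MiB/s in the JSON result.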
00:13:58.241 [2024-11-26 19:21:56.656876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71652 ] 00:13:58.500 [2024-11-26 19:21:56.803617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.500 [2024-11-26 19:21:56.849461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.500 [2024-11-26 19:21:56.903167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.759 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.759 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:58.759 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:13:58.759 [2024-11-26 19:21:57.176685] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.aPdljVs0ge': 0100666 00:13:58.759 [2024-11-26 19:21:57.176880] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:58.759 request: 00:13:58.759 { 00:13:58.759 "name": "key0", 00:13:58.759 "path": "/tmp/tmp.aPdljVs0ge", 00:13:58.759 "method": "keyring_file_add_key", 00:13:58.759 "req_id": 1 00:13:58.759 } 00:13:58.759 Got JSON-RPC error response 00:13:58.759 response: 00:13:58.759 { 00:13:58.759 "code": -1, 00:13:58.759 "message": "Operation not permitted" 00:13:58.759 } 00:13:58.759 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:59.018 [2024-11-26 19:21:57.444862] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:59.018 [2024-11-26 19:21:57.445152] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:59.018 request: 00:13:59.018 { 00:13:59.018 "name": "TLSTEST", 00:13:59.018 "trtype": "tcp", 00:13:59.018 "traddr": "10.0.0.3", 00:13:59.018 "adrfam": "ipv4", 00:13:59.018 "trsvcid": "4420", 00:13:59.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.018 "prchk_reftag": false, 00:13:59.018 "prchk_guard": false, 00:13:59.018 "hdgst": false, 00:13:59.018 "ddgst": false, 00:13:59.018 "psk": "key0", 00:13:59.018 "allow_unrecognized_csi": false, 00:13:59.018 "method": "bdev_nvme_attach_controller", 00:13:59.018 "req_id": 1 00:13:59.018 } 00:13:59.018 Got JSON-RPC error response 00:13:59.018 response: 00:13:59.018 { 00:13:59.018 "code": -126, 00:13:59.018 "message": "Required key not available" 00:13:59.018 } 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71652 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71652 ']' 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71652 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71652 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:59.277 killing process with pid 71652 00:13:59.277 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.277 00:13:59.277 Latency(us) 00:13:59.277 [2024-11-26T19:21:57.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.277 [2024-11-26T19:21:57.717Z] =================================================================================================================== 00:13:59.277 [2024-11-26T19:21:57.717Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71652' 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71652 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71652 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71463 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71463 ']' 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71463 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71463 00:13:59.277 killing process with pid 71463 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71463' 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71463 00:13:59.277 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71463 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71678 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71678 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71678 ']' 00:13:59.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.537 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.537 [2024-11-26 19:21:57.962582] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:13:59.537 [2024-11-26 19:21:57.962892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.796 [2024-11-26 19:21:58.107222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.796 [2024-11-26 19:21:58.154980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.796 [2024-11-26 19:21:58.155035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.796 [2024-11-26 19:21:58.155062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.796 [2024-11-26 19:21:58.155070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.796 [2024-11-26 19:21:58.155076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
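The failure pair traced a little above is the point of that sub-test: with the key file at 0666 the keyring rejects it (keyring_file_check_path: "Invalid permissions for key file ... 0100666", JSON-RPC error -1), and the following bdev_nvme_attach_controller then fails with -126 "Required key not available", so the NOT wrapper returns success. A quick shell check, and the fix the script applies before re-running the positive case (target/tls.sh@182):

key=/tmp/tmp.aPdljVs0ge
stat -c '%a %n' "$key"     # shows 666 in the failing case
chmod 0600 "$key"          # owner-only permissions, the mode the keyring accepted earlier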
00:13:59.796 [2024-11-26 19:21:58.155441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.796 [2024-11-26 19:21:58.207311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.aPdljVs0ge 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.aPdljVs0ge 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.aPdljVs0ge 00:14:00.733 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aPdljVs0ge 00:14:00.734 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:00.992 [2024-11-26 19:21:59.178020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.992 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:01.251 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:01.510 [2024-11-26 19:21:59.754177] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.510 [2024-11-26 19:21:59.754388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:01.510 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:01.769 malloc0 00:14:01.770 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:02.028 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:14:02.028 
[2024-11-26 19:22:00.437264] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.aPdljVs0ge': 0100666 00:14:02.028 [2024-11-26 19:22:00.437318] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:02.028 request: 00:14:02.028 { 00:14:02.028 "name": "key0", 00:14:02.028 "path": "/tmp/tmp.aPdljVs0ge", 00:14:02.028 "method": "keyring_file_add_key", 00:14:02.028 "req_id": 1 00:14:02.028 } 00:14:02.028 Got JSON-RPC error response 00:14:02.028 response: 00:14:02.028 { 00:14:02.028 "code": -1, 00:14:02.028 "message": "Operation not permitted" 00:14:02.028 } 00:14:02.029 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:02.287 [2024-11-26 19:22:00.669415] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:02.287 [2024-11-26 19:22:00.669481] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:02.287 request: 00:14:02.287 { 00:14:02.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.287 "host": "nqn.2016-06.io.spdk:host1", 00:14:02.287 "psk": "key0", 00:14:02.287 "method": "nvmf_subsystem_add_host", 00:14:02.287 "req_id": 1 00:14:02.287 } 00:14:02.287 Got JSON-RPC error response 00:14:02.287 response: 00:14:02.287 { 00:14:02.287 "code": -32603, 00:14:02.287 "message": "Internal error" 00:14:02.287 } 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71678 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71678 ']' 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71678 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71678 00:14:02.287 killing process with pid 71678 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.287 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71678' 00:14:02.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71678 00:14:02.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71678 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.aPdljVs0ge 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71747 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71747 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71747 ']' 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.547 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.548 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.548 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.548 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.806 [2024-11-26 19:22:00.989891] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:02.806 [2024-11-26 19:22:00.990001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.806 [2024-11-26 19:22:01.131617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.806 [2024-11-26 19:22:01.181093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.806 [2024-11-26 19:22:01.181141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.806 [2024-11-26 19:22:01.181168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.807 [2024-11-26 19:22:01.181176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.807 [2024-11-26 19:22:01.181182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
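The app_setup_trace notices repeated for each nvmf_tgt instance double as a how-to for grabbing tracepoints from these runs (group mask 0xFFFF, instance id 0). Roughly, and assuming spdk_trace was built into build/bin in this workspace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0   # live snapshot, as the notice suggests
cp /dev/shm/nvmf_trace.0 .                                       # or keep the shm copy for offline analysis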
00:14:02.807 [2024-11-26 19:22:01.181541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.807 [2024-11-26 19:22:01.232659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.aPdljVs0ge 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aPdljVs0ge 00:14:03.743 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:04.002 [2024-11-26 19:22:02.206571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.002 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:04.260 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:04.520 [2024-11-26 19:22:02.770669] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:04.520 [2024-11-26 19:22:02.770885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:04.520 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:04.781 malloc0 00:14:04.781 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:05.040 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:14:05.299 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:05.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71803 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71803 /var/tmp/bdevperf.sock 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71803 ']' 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.579 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.579 [2024-11-26 19:22:03.827377] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:05.579 [2024-11-26 19:22:03.827674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71803 ] 00:14:05.579 [2024-11-26 19:22:03.980401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.867 [2024-11-26 19:22:04.043364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.867 [2024-11-26 19:22:04.098403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.436 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.436 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:06.436 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:14:06.695 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:06.955 [2024-11-26 19:22:05.227222] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:06.955 TLSTESTn1 00:14:06.955 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:07.214 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:07.214 "subsystems": [ 00:14:07.214 { 00:14:07.214 "subsystem": "keyring", 00:14:07.214 "config": [ 00:14:07.214 { 00:14:07.214 "method": "keyring_file_add_key", 00:14:07.214 "params": { 00:14:07.214 "name": "key0", 00:14:07.214 "path": "/tmp/tmp.aPdljVs0ge" 00:14:07.214 } 00:14:07.214 } 00:14:07.214 ] 00:14:07.214 }, 
00:14:07.214 { 00:14:07.214 "subsystem": "iobuf", 00:14:07.214 "config": [ 00:14:07.214 { 00:14:07.214 "method": "iobuf_set_options", 00:14:07.214 "params": { 00:14:07.214 "small_pool_count": 8192, 00:14:07.214 "large_pool_count": 1024, 00:14:07.214 "small_bufsize": 8192, 00:14:07.214 "large_bufsize": 135168, 00:14:07.214 "enable_numa": false 00:14:07.214 } 00:14:07.214 } 00:14:07.214 ] 00:14:07.214 }, 00:14:07.214 { 00:14:07.214 "subsystem": "sock", 00:14:07.214 "config": [ 00:14:07.214 { 00:14:07.214 "method": "sock_set_default_impl", 00:14:07.214 "params": { 00:14:07.214 "impl_name": "uring" 00:14:07.214 } 00:14:07.214 }, 00:14:07.214 { 00:14:07.214 "method": "sock_impl_set_options", 00:14:07.214 "params": { 00:14:07.214 "impl_name": "ssl", 00:14:07.214 "recv_buf_size": 4096, 00:14:07.214 "send_buf_size": 4096, 00:14:07.214 "enable_recv_pipe": true, 00:14:07.214 "enable_quickack": false, 00:14:07.214 "enable_placement_id": 0, 00:14:07.215 "enable_zerocopy_send_server": true, 00:14:07.215 "enable_zerocopy_send_client": false, 00:14:07.215 "zerocopy_threshold": 0, 00:14:07.215 "tls_version": 0, 00:14:07.215 "enable_ktls": false 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "sock_impl_set_options", 00:14:07.215 "params": { 00:14:07.215 "impl_name": "posix", 00:14:07.215 "recv_buf_size": 2097152, 00:14:07.215 "send_buf_size": 2097152, 00:14:07.215 "enable_recv_pipe": true, 00:14:07.215 "enable_quickack": false, 00:14:07.215 "enable_placement_id": 0, 00:14:07.215 "enable_zerocopy_send_server": true, 00:14:07.215 "enable_zerocopy_send_client": false, 00:14:07.215 "zerocopy_threshold": 0, 00:14:07.215 "tls_version": 0, 00:14:07.215 "enable_ktls": false 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "sock_impl_set_options", 00:14:07.215 "params": { 00:14:07.215 "impl_name": "uring", 00:14:07.215 "recv_buf_size": 2097152, 00:14:07.215 "send_buf_size": 2097152, 00:14:07.215 "enable_recv_pipe": true, 00:14:07.215 "enable_quickack": false, 00:14:07.215 "enable_placement_id": 0, 00:14:07.215 "enable_zerocopy_send_server": false, 00:14:07.215 "enable_zerocopy_send_client": false, 00:14:07.215 "zerocopy_threshold": 0, 00:14:07.215 "tls_version": 0, 00:14:07.215 "enable_ktls": false 00:14:07.215 } 00:14:07.215 } 00:14:07.215 ] 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "subsystem": "vmd", 00:14:07.215 "config": [] 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "subsystem": "accel", 00:14:07.215 "config": [ 00:14:07.215 { 00:14:07.215 "method": "accel_set_options", 00:14:07.215 "params": { 00:14:07.215 "small_cache_size": 128, 00:14:07.215 "large_cache_size": 16, 00:14:07.215 "task_count": 2048, 00:14:07.215 "sequence_count": 2048, 00:14:07.215 "buf_count": 2048 00:14:07.215 } 00:14:07.215 } 00:14:07.215 ] 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "subsystem": "bdev", 00:14:07.215 "config": [ 00:14:07.215 { 00:14:07.215 "method": "bdev_set_options", 00:14:07.215 "params": { 00:14:07.215 "bdev_io_pool_size": 65535, 00:14:07.215 "bdev_io_cache_size": 256, 00:14:07.215 "bdev_auto_examine": true, 00:14:07.215 "iobuf_small_cache_size": 128, 00:14:07.215 "iobuf_large_cache_size": 16 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "bdev_raid_set_options", 00:14:07.215 "params": { 00:14:07.215 "process_window_size_kb": 1024, 00:14:07.215 "process_max_bandwidth_mb_sec": 0 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "bdev_iscsi_set_options", 00:14:07.215 "params": { 00:14:07.215 "timeout_sec": 30 00:14:07.215 } 00:14:07.215 
}, 00:14:07.215 { 00:14:07.215 "method": "bdev_nvme_set_options", 00:14:07.215 "params": { 00:14:07.215 "action_on_timeout": "none", 00:14:07.215 "timeout_us": 0, 00:14:07.215 "timeout_admin_us": 0, 00:14:07.215 "keep_alive_timeout_ms": 10000, 00:14:07.215 "arbitration_burst": 0, 00:14:07.215 "low_priority_weight": 0, 00:14:07.215 "medium_priority_weight": 0, 00:14:07.215 "high_priority_weight": 0, 00:14:07.215 "nvme_adminq_poll_period_us": 10000, 00:14:07.215 "nvme_ioq_poll_period_us": 0, 00:14:07.215 "io_queue_requests": 0, 00:14:07.215 "delay_cmd_submit": true, 00:14:07.215 "transport_retry_count": 4, 00:14:07.215 "bdev_retry_count": 3, 00:14:07.215 "transport_ack_timeout": 0, 00:14:07.215 "ctrlr_loss_timeout_sec": 0, 00:14:07.215 "reconnect_delay_sec": 0, 00:14:07.215 "fast_io_fail_timeout_sec": 0, 00:14:07.215 "disable_auto_failback": false, 00:14:07.215 "generate_uuids": false, 00:14:07.215 "transport_tos": 0, 00:14:07.215 "nvme_error_stat": false, 00:14:07.215 "rdma_srq_size": 0, 00:14:07.215 "io_path_stat": false, 00:14:07.215 "allow_accel_sequence": false, 00:14:07.215 "rdma_max_cq_size": 0, 00:14:07.215 "rdma_cm_event_timeout_ms": 0, 00:14:07.215 "dhchap_digests": [ 00:14:07.215 "sha256", 00:14:07.215 "sha384", 00:14:07.215 "sha512" 00:14:07.215 ], 00:14:07.215 "dhchap_dhgroups": [ 00:14:07.215 "null", 00:14:07.215 "ffdhe2048", 00:14:07.215 "ffdhe3072", 00:14:07.215 "ffdhe4096", 00:14:07.215 "ffdhe6144", 00:14:07.215 "ffdhe8192" 00:14:07.215 ] 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "bdev_nvme_set_hotplug", 00:14:07.215 "params": { 00:14:07.215 "period_us": 100000, 00:14:07.215 "enable": false 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "bdev_malloc_create", 00:14:07.215 "params": { 00:14:07.215 "name": "malloc0", 00:14:07.215 "num_blocks": 8192, 00:14:07.215 "block_size": 4096, 00:14:07.215 "physical_block_size": 4096, 00:14:07.215 "uuid": "4682a887-a69d-48cf-a65e-a0117a3fddaa", 00:14:07.215 "optimal_io_boundary": 0, 00:14:07.215 "md_size": 0, 00:14:07.215 "dif_type": 0, 00:14:07.215 "dif_is_head_of_md": false, 00:14:07.215 "dif_pi_format": 0 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "bdev_wait_for_examine" 00:14:07.215 } 00:14:07.215 ] 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "subsystem": "nbd", 00:14:07.215 "config": [] 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "subsystem": "scheduler", 00:14:07.215 "config": [ 00:14:07.215 { 00:14:07.215 "method": "framework_set_scheduler", 00:14:07.215 "params": { 00:14:07.215 "name": "static" 00:14:07.215 } 00:14:07.215 } 00:14:07.215 ] 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "subsystem": "nvmf", 00:14:07.215 "config": [ 00:14:07.215 { 00:14:07.215 "method": "nvmf_set_config", 00:14:07.215 "params": { 00:14:07.215 "discovery_filter": "match_any", 00:14:07.215 "admin_cmd_passthru": { 00:14:07.215 "identify_ctrlr": false 00:14:07.215 }, 00:14:07.215 "dhchap_digests": [ 00:14:07.215 "sha256", 00:14:07.215 "sha384", 00:14:07.215 "sha512" 00:14:07.215 ], 00:14:07.215 "dhchap_dhgroups": [ 00:14:07.215 "null", 00:14:07.215 "ffdhe2048", 00:14:07.215 "ffdhe3072", 00:14:07.215 "ffdhe4096", 00:14:07.215 "ffdhe6144", 00:14:07.215 "ffdhe8192" 00:14:07.215 ] 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "nvmf_set_max_subsystems", 00:14:07.215 "params": { 00:14:07.215 "max_subsystems": 1024 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "nvmf_set_crdt", 00:14:07.215 "params": { 00:14:07.215 "crdt1": 0, 00:14:07.215 
"crdt2": 0, 00:14:07.215 "crdt3": 0 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "nvmf_create_transport", 00:14:07.215 "params": { 00:14:07.215 "trtype": "TCP", 00:14:07.215 "max_queue_depth": 128, 00:14:07.215 "max_io_qpairs_per_ctrlr": 127, 00:14:07.215 "in_capsule_data_size": 4096, 00:14:07.215 "max_io_size": 131072, 00:14:07.215 "io_unit_size": 131072, 00:14:07.215 "max_aq_depth": 128, 00:14:07.215 "num_shared_buffers": 511, 00:14:07.215 "buf_cache_size": 4294967295, 00:14:07.215 "dif_insert_or_strip": false, 00:14:07.215 "zcopy": false, 00:14:07.215 "c2h_success": false, 00:14:07.215 "sock_priority": 0, 00:14:07.215 "abort_timeout_sec": 1, 00:14:07.215 "ack_timeout": 0, 00:14:07.215 "data_wr_pool_size": 0 00:14:07.215 } 00:14:07.215 }, 00:14:07.215 { 00:14:07.215 "method": "nvmf_create_subsystem", 00:14:07.215 "params": { 00:14:07.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.215 "allow_any_host": false, 00:14:07.215 "serial_number": "SPDK00000000000001", 00:14:07.215 "model_number": "SPDK bdev Controller", 00:14:07.215 "max_namespaces": 10, 00:14:07.215 "min_cntlid": 1, 00:14:07.216 "max_cntlid": 65519, 00:14:07.216 "ana_reporting": false 00:14:07.216 } 00:14:07.216 }, 00:14:07.216 { 00:14:07.216 "method": "nvmf_subsystem_add_host", 00:14:07.216 "params": { 00:14:07.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.216 "host": "nqn.2016-06.io.spdk:host1", 00:14:07.216 "psk": "key0" 00:14:07.216 } 00:14:07.216 }, 00:14:07.216 { 00:14:07.216 "method": "nvmf_subsystem_add_ns", 00:14:07.216 "params": { 00:14:07.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.216 "namespace": { 00:14:07.216 "nsid": 1, 00:14:07.216 "bdev_name": "malloc0", 00:14:07.216 "nguid": "4682A887A69D48CFA65EA0117A3FDDAA", 00:14:07.216 "uuid": "4682a887-a69d-48cf-a65e-a0117a3fddaa", 00:14:07.216 "no_auto_visible": false 00:14:07.216 } 00:14:07.216 } 00:14:07.216 }, 00:14:07.216 { 00:14:07.216 "method": "nvmf_subsystem_add_listener", 00:14:07.216 "params": { 00:14:07.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.216 "listen_address": { 00:14:07.216 "trtype": "TCP", 00:14:07.216 "adrfam": "IPv4", 00:14:07.216 "traddr": "10.0.0.3", 00:14:07.216 "trsvcid": "4420" 00:14:07.216 }, 00:14:07.216 "secure_channel": true 00:14:07.216 } 00:14:07.216 } 00:14:07.216 ] 00:14:07.216 } 00:14:07.216 ] 00:14:07.216 }' 00:14:07.216 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:07.785 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:07.785 "subsystems": [ 00:14:07.785 { 00:14:07.785 "subsystem": "keyring", 00:14:07.785 "config": [ 00:14:07.785 { 00:14:07.785 "method": "keyring_file_add_key", 00:14:07.785 "params": { 00:14:07.785 "name": "key0", 00:14:07.785 "path": "/tmp/tmp.aPdljVs0ge" 00:14:07.785 } 00:14:07.785 } 00:14:07.785 ] 00:14:07.785 }, 00:14:07.785 { 00:14:07.785 "subsystem": "iobuf", 00:14:07.785 "config": [ 00:14:07.785 { 00:14:07.785 "method": "iobuf_set_options", 00:14:07.785 "params": { 00:14:07.785 "small_pool_count": 8192, 00:14:07.785 "large_pool_count": 1024, 00:14:07.785 "small_bufsize": 8192, 00:14:07.785 "large_bufsize": 135168, 00:14:07.785 "enable_numa": false 00:14:07.785 } 00:14:07.785 } 00:14:07.785 ] 00:14:07.785 }, 00:14:07.785 { 00:14:07.785 "subsystem": "sock", 00:14:07.785 "config": [ 00:14:07.785 { 00:14:07.785 "method": "sock_set_default_impl", 00:14:07.785 "params": { 00:14:07.785 "impl_name": "uring" 00:14:07.785 
} 00:14:07.785 }, 00:14:07.785 { 00:14:07.785 "method": "sock_impl_set_options", 00:14:07.785 "params": { 00:14:07.785 "impl_name": "ssl", 00:14:07.785 "recv_buf_size": 4096, 00:14:07.785 "send_buf_size": 4096, 00:14:07.785 "enable_recv_pipe": true, 00:14:07.785 "enable_quickack": false, 00:14:07.785 "enable_placement_id": 0, 00:14:07.785 "enable_zerocopy_send_server": true, 00:14:07.785 "enable_zerocopy_send_client": false, 00:14:07.785 "zerocopy_threshold": 0, 00:14:07.785 "tls_version": 0, 00:14:07.785 "enable_ktls": false 00:14:07.785 } 00:14:07.785 }, 00:14:07.785 { 00:14:07.785 "method": "sock_impl_set_options", 00:14:07.785 "params": { 00:14:07.785 "impl_name": "posix", 00:14:07.785 "recv_buf_size": 2097152, 00:14:07.785 "send_buf_size": 2097152, 00:14:07.785 "enable_recv_pipe": true, 00:14:07.785 "enable_quickack": false, 00:14:07.785 "enable_placement_id": 0, 00:14:07.785 "enable_zerocopy_send_server": true, 00:14:07.785 "enable_zerocopy_send_client": false, 00:14:07.785 "zerocopy_threshold": 0, 00:14:07.785 "tls_version": 0, 00:14:07.785 "enable_ktls": false 00:14:07.785 } 00:14:07.785 }, 00:14:07.785 { 00:14:07.785 "method": "sock_impl_set_options", 00:14:07.785 "params": { 00:14:07.785 "impl_name": "uring", 00:14:07.785 "recv_buf_size": 2097152, 00:14:07.785 "send_buf_size": 2097152, 00:14:07.785 "enable_recv_pipe": true, 00:14:07.785 "enable_quickack": false, 00:14:07.785 "enable_placement_id": 0, 00:14:07.785 "enable_zerocopy_send_server": false, 00:14:07.785 "enable_zerocopy_send_client": false, 00:14:07.785 "zerocopy_threshold": 0, 00:14:07.785 "tls_version": 0, 00:14:07.785 "enable_ktls": false 00:14:07.785 } 00:14:07.785 } 00:14:07.785 ] 00:14:07.785 }, 00:14:07.785 { 00:14:07.785 "subsystem": "vmd", 00:14:07.785 "config": [] 00:14:07.785 }, 00:14:07.785 { 00:14:07.785 "subsystem": "accel", 00:14:07.785 "config": [ 00:14:07.785 { 00:14:07.785 "method": "accel_set_options", 00:14:07.785 "params": { 00:14:07.785 "small_cache_size": 128, 00:14:07.785 "large_cache_size": 16, 00:14:07.785 "task_count": 2048, 00:14:07.785 "sequence_count": 2048, 00:14:07.785 "buf_count": 2048 00:14:07.785 } 00:14:07.785 } 00:14:07.785 ] 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "subsystem": "bdev", 00:14:07.786 "config": [ 00:14:07.786 { 00:14:07.786 "method": "bdev_set_options", 00:14:07.786 "params": { 00:14:07.786 "bdev_io_pool_size": 65535, 00:14:07.786 "bdev_io_cache_size": 256, 00:14:07.786 "bdev_auto_examine": true, 00:14:07.786 "iobuf_small_cache_size": 128, 00:14:07.786 "iobuf_large_cache_size": 16 00:14:07.786 } 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "method": "bdev_raid_set_options", 00:14:07.786 "params": { 00:14:07.786 "process_window_size_kb": 1024, 00:14:07.786 "process_max_bandwidth_mb_sec": 0 00:14:07.786 } 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "method": "bdev_iscsi_set_options", 00:14:07.786 "params": { 00:14:07.786 "timeout_sec": 30 00:14:07.786 } 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "method": "bdev_nvme_set_options", 00:14:07.786 "params": { 00:14:07.786 "action_on_timeout": "none", 00:14:07.786 "timeout_us": 0, 00:14:07.786 "timeout_admin_us": 0, 00:14:07.786 "keep_alive_timeout_ms": 10000, 00:14:07.786 "arbitration_burst": 0, 00:14:07.786 "low_priority_weight": 0, 00:14:07.786 "medium_priority_weight": 0, 00:14:07.786 "high_priority_weight": 0, 00:14:07.786 "nvme_adminq_poll_period_us": 10000, 00:14:07.786 "nvme_ioq_poll_period_us": 0, 00:14:07.786 "io_queue_requests": 512, 00:14:07.786 "delay_cmd_submit": true, 00:14:07.786 "transport_retry_count": 4, 
00:14:07.786 "bdev_retry_count": 3, 00:14:07.786 "transport_ack_timeout": 0, 00:14:07.786 "ctrlr_loss_timeout_sec": 0, 00:14:07.786 "reconnect_delay_sec": 0, 00:14:07.786 "fast_io_fail_timeout_sec": 0, 00:14:07.786 "disable_auto_failback": false, 00:14:07.786 "generate_uuids": false, 00:14:07.786 "transport_tos": 0, 00:14:07.786 "nvme_error_stat": false, 00:14:07.786 "rdma_srq_size": 0, 00:14:07.786 "io_path_stat": false, 00:14:07.786 "allow_accel_sequence": false, 00:14:07.786 "rdma_max_cq_size": 0, 00:14:07.786 "rdma_cm_event_timeout_ms": 0, 00:14:07.786 "dhchap_digests": [ 00:14:07.786 "sha256", 00:14:07.786 "sha384", 00:14:07.786 "sha512" 00:14:07.786 ], 00:14:07.786 "dhchap_dhgroups": [ 00:14:07.786 "null", 00:14:07.786 "ffdhe2048", 00:14:07.786 "ffdhe3072", 00:14:07.786 "ffdhe4096", 00:14:07.786 "ffdhe6144", 00:14:07.786 "ffdhe8192" 00:14:07.786 ] 00:14:07.786 } 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "method": "bdev_nvme_attach_controller", 00:14:07.786 "params": { 00:14:07.786 "name": "TLSTEST", 00:14:07.786 "trtype": "TCP", 00:14:07.786 "adrfam": "IPv4", 00:14:07.786 "traddr": "10.0.0.3", 00:14:07.786 "trsvcid": "4420", 00:14:07.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.786 "prchk_reftag": false, 00:14:07.786 "prchk_guard": false, 00:14:07.786 "ctrlr_loss_timeout_sec": 0, 00:14:07.786 "reconnect_delay_sec": 0, 00:14:07.786 "fast_io_fail_timeout_sec": 0, 00:14:07.786 "psk": "key0", 00:14:07.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.786 "hdgst": false, 00:14:07.786 "ddgst": false, 00:14:07.786 "multipath": "multipath" 00:14:07.786 } 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "method": "bdev_nvme_set_hotplug", 00:14:07.786 "params": { 00:14:07.786 "period_us": 100000, 00:14:07.786 "enable": false 00:14:07.786 } 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "method": "bdev_wait_for_examine" 00:14:07.786 } 00:14:07.786 ] 00:14:07.786 }, 00:14:07.786 { 00:14:07.786 "subsystem": "nbd", 00:14:07.786 "config": [] 00:14:07.786 } 00:14:07.786 ] 00:14:07.786 }' 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71803 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71803 ']' 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71803 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71803 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71803' 00:14:07.786 killing process with pid 71803 00:14:07.786 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.786 00:14:07.786 Latency(us) 00:14:07.786 [2024-11-26T19:22:06.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.786 [2024-11-26T19:22:06.226Z] =================================================================================================================== 00:14:07.786 [2024-11-26T19:22:06.226Z] Total : 0.00 0.00 0.00 
0.00 0.00 18446744073709551616.00 0.00 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71803 00:14:07.786 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71803 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71747 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71747 ']' 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71747 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71747 00:14:07.786 killing process with pid 71747 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71747' 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71747 00:14:07.786 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71747 00:14:08.046 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:08.046 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.046 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.046 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:08.046 "subsystems": [ 00:14:08.046 { 00:14:08.046 "subsystem": "keyring", 00:14:08.046 "config": [ 00:14:08.046 { 00:14:08.046 "method": "keyring_file_add_key", 00:14:08.046 "params": { 00:14:08.046 "name": "key0", 00:14:08.046 "path": "/tmp/tmp.aPdljVs0ge" 00:14:08.046 } 00:14:08.046 } 00:14:08.046 ] 00:14:08.046 }, 00:14:08.046 { 00:14:08.046 "subsystem": "iobuf", 00:14:08.046 "config": [ 00:14:08.046 { 00:14:08.046 "method": "iobuf_set_options", 00:14:08.046 "params": { 00:14:08.046 "small_pool_count": 8192, 00:14:08.046 "large_pool_count": 1024, 00:14:08.046 "small_bufsize": 8192, 00:14:08.046 "large_bufsize": 135168, 00:14:08.046 "enable_numa": false 00:14:08.046 } 00:14:08.046 } 00:14:08.046 ] 00:14:08.046 }, 00:14:08.046 { 00:14:08.046 "subsystem": "sock", 00:14:08.046 "config": [ 00:14:08.046 { 00:14:08.046 "method": "sock_set_default_impl", 00:14:08.046 "params": { 00:14:08.046 "impl_name": "uring" 00:14:08.046 } 00:14:08.046 }, 00:14:08.046 { 00:14:08.046 "method": "sock_impl_set_options", 00:14:08.046 "params": { 00:14:08.046 "impl_name": "ssl", 00:14:08.046 "recv_buf_size": 4096, 00:14:08.046 "send_buf_size": 4096, 00:14:08.046 "enable_recv_pipe": true, 00:14:08.046 "enable_quickack": false, 00:14:08.046 "enable_placement_id": 0, 00:14:08.046 "enable_zerocopy_send_server": true, 00:14:08.046 "enable_zerocopy_send_client": false, 00:14:08.046 "zerocopy_threshold": 0, 00:14:08.046 "tls_version": 0, 00:14:08.046 "enable_ktls": false 00:14:08.046 } 00:14:08.046 }, 00:14:08.046 { 00:14:08.046 "method": 
"sock_impl_set_options", 00:14:08.046 "params": { 00:14:08.046 "impl_name": "posix", 00:14:08.046 "recv_buf_size": 2097152, 00:14:08.046 "send_buf_size": 2097152, 00:14:08.046 "enable_recv_pipe": true, 00:14:08.046 "enable_quickack": false, 00:14:08.046 "enable_placement_id": 0, 00:14:08.046 "enable_zerocopy_send_server": true, 00:14:08.046 "enable_zerocopy_send_client": false, 00:14:08.046 "zerocopy_threshold": 0, 00:14:08.047 "tls_version": 0, 00:14:08.047 "enable_ktls": false 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "sock_impl_set_options", 00:14:08.047 "params": { 00:14:08.047 "impl_name": "uring", 00:14:08.047 "recv_buf_size": 2097152, 00:14:08.047 "send_buf_size": 2097152, 00:14:08.047 "enable_recv_pipe": true, 00:14:08.047 "enable_quickack": false, 00:14:08.047 "enable_placement_id": 0, 00:14:08.047 "enable_zerocopy_send_server": false, 00:14:08.047 "enable_zerocopy_send_client": false, 00:14:08.047 "zerocopy_threshold": 0, 00:14:08.047 "tls_version": 0, 00:14:08.047 "enable_ktls": false 00:14:08.047 } 00:14:08.047 } 00:14:08.047 ] 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "subsystem": "vmd", 00:14:08.047 "config": [] 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "subsystem": "accel", 00:14:08.047 "config": [ 00:14:08.047 { 00:14:08.047 "method": "accel_set_options", 00:14:08.047 "params": { 00:14:08.047 "small_cache_size": 128, 00:14:08.047 "large_cache_size": 16, 00:14:08.047 "task_count": 2048, 00:14:08.047 "sequence_count": 2048, 00:14:08.047 "buf_count": 2048 00:14:08.047 } 00:14:08.047 } 00:14:08.047 ] 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "subsystem": "bdev", 00:14:08.047 "config": [ 00:14:08.047 { 00:14:08.047 "method": "bdev_set_options", 00:14:08.047 "params": { 00:14:08.047 "bdev_io_pool_size": 65535, 00:14:08.047 "bdev_io_cache_size": 256, 00:14:08.047 "bdev_auto_examine": true, 00:14:08.047 "iobuf_small_cache_size": 128, 00:14:08.047 "iobuf_large_cache_size": 16 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "bdev_raid_set_options", 00:14:08.047 "params": { 00:14:08.047 "process_window_size_kb": 1024, 00:14:08.047 "process_max_bandwidth_mb_sec": 0 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "bdev_iscsi_set_options", 00:14:08.047 "params": { 00:14:08.047 "timeout_sec": 30 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "bdev_nvme_set_options", 00:14:08.047 "params": { 00:14:08.047 "action_on_timeout": "none", 00:14:08.047 "timeout_us": 0, 00:14:08.047 "timeout_admin_us": 0, 00:14:08.047 "keep_alive_timeout_ms": 10000, 00:14:08.047 "arbitration_burst": 0, 00:14:08.047 "low_priority_weight": 0, 00:14:08.047 "medium_priority_weight": 0, 00:14:08.047 "high_priority_weight": 0, 00:14:08.047 "nvme_adminq_poll_period_us": 10000, 00:14:08.047 "nvme_ioq_poll_period_us": 0, 00:14:08.047 "io_queue_requests": 0, 00:14:08.047 "delay_cmd_submit": true, 00:14:08.047 "transport_retry_count": 4, 00:14:08.047 "bdev_retry_count": 3, 00:14:08.047 "transport_ack_timeout": 0, 00:14:08.047 "ctrlr_loss_timeout_sec": 0, 00:14:08.047 "reconnect_delay_sec": 0, 00:14:08.047 "fast_io_fail_timeout_sec": 0, 00:14:08.047 "disable_auto_failback": false, 00:14:08.047 "generate_uuids": false, 00:14:08.047 "transport_tos": 0, 00:14:08.047 "nvme_error_stat": false, 00:14:08.047 "rdma_srq_size": 0, 00:14:08.047 "io_path_stat": false, 00:14:08.047 "allow_accel_sequence": false, 00:14:08.047 "rdma_max_cq_size": 0, 00:14:08.047 "rdma_cm_event_timeout_ms": 0, 00:14:08.047 "dhchap_digests": [ 00:14:08.047 
"sha256", 00:14:08.047 "sha384", 00:14:08.047 "sha512" 00:14:08.047 ], 00:14:08.047 "dhchap_dhgroups": [ 00:14:08.047 "null", 00:14:08.047 "ffdhe2048", 00:14:08.047 "ffdhe3072", 00:14:08.047 "ffdhe4096", 00:14:08.047 "ffdhe6144", 00:14:08.047 "ffdhe8192" 00:14:08.047 ] 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "bdev_nvme_set_hotplug", 00:14:08.047 "params": { 00:14:08.047 "period_us": 100000, 00:14:08.047 "enable": false 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "bdev_malloc_create", 00:14:08.047 "params": { 00:14:08.047 "name": "malloc0", 00:14:08.047 "num_blocks": 8192, 00:14:08.047 "block_size": 4096, 00:14:08.047 "physical_block_size": 4096, 00:14:08.047 "uuid": "4682a887-a69d-48cf-a65e-a0117a3fddaa", 00:14:08.047 "optimal_io_boundary": 0, 00:14:08.047 "md_size": 0, 00:14:08.047 "dif_type": 0, 00:14:08.047 "dif_is_head_of_md": false, 00:14:08.047 "dif_pi_format": 0 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "bdev_wait_for_examine" 00:14:08.047 } 00:14:08.047 ] 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "subsystem": "nbd", 00:14:08.047 "config": [] 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "subsystem": "scheduler", 00:14:08.047 "config": [ 00:14:08.047 { 00:14:08.047 "method": "framework_set_scheduler", 00:14:08.047 "params": { 00:14:08.047 "name": "static" 00:14:08.047 } 00:14:08.047 } 00:14:08.047 ] 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "subsystem": "nvmf", 00:14:08.047 "config": [ 00:14:08.047 { 00:14:08.047 "method": "nvmf_set_config", 00:14:08.047 "params": { 00:14:08.047 "discovery_filter": "match_any", 00:14:08.047 "admin_cmd_passthru": { 00:14:08.047 "identify_ctrlr": false 00:14:08.047 }, 00:14:08.047 "dhchap_digests": [ 00:14:08.047 "sha256", 00:14:08.047 "sha384", 00:14:08.047 "sha512" 00:14:08.047 ], 00:14:08.047 "dhchap_dhgroups": [ 00:14:08.047 "null", 00:14:08.047 "ffdhe2048", 00:14:08.047 "ffdhe3072", 00:14:08.047 "ffdhe4096", 00:14:08.047 "ffdhe6144", 00:14:08.047 "ffdhe8192" 00:14:08.047 ] 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "nvmf_set_max_subsystems", 00:14:08.047 "params": { 00:14:08.047 "max_subsystems": 1024 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "nvmf_set_crdt", 00:14:08.047 "params": { 00:14:08.047 "crdt1": 0, 00:14:08.047 "crdt2": 0, 00:14:08.047 "crdt3": 0 00:14:08.047 } 00:14:08.047 }, 00:14:08.047 { 00:14:08.047 "method": "nvmf_create_transport", 00:14:08.047 "params": { 00:14:08.047 "trtype": "TCP", 00:14:08.047 "max_queue_depth": 128, 00:14:08.047 "max_io_qpairs_per_ctrlr": 127, 00:14:08.047 "in_capsule_data_size": 4096, 00:14:08.047 "max_io_size": 131072, 00:14:08.047 "io_unit_size": 131072, 00:14:08.047 "max_aq_depth": 128, 00:14:08.047 "num_shared_buffers": 511, 00:14:08.047 "buf_cache_size": 4294967295, 00:14:08.047 "dif_insert_or_strip": false, 00:14:08.047 "zcopy": false, 00:14:08.047 "c2h_success": false, 00:14:08.047 "sock_priority": 0, 00:14:08.048 "abort_timeout_sec": 1, 00:14:08.048 "ack_timeout": 0, 00:14:08.048 "data_wr_pool_size": 0 00:14:08.048 } 00:14:08.048 }, 00:14:08.048 { 00:14:08.048 "method": "nvmf_create_subsystem", 00:14:08.048 "params": { 00:14:08.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.048 "allow_any_host": false, 00:14:08.048 "serial_number": "SPDK00000000000001", 00:14:08.048 "model_number": "SPDK bdev Controller", 00:14:08.048 "max_namespaces": 10, 00:14:08.048 "min_cntlid": 1, 00:14:08.048 "max_cntlid": 65519, 00:14:08.048 "ana_reporting": false 00:14:08.048 } 
00:14:08.048 }, 00:14:08.048 { 00:14:08.048 "method": "nvmf_subsystem_add_host", 00:14:08.048 "params": { 00:14:08.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.048 "host": "nqn.2016-06.io.spdk:host1", 00:14:08.048 "psk": "key0" 00:14:08.048 } 00:14:08.048 }, 00:14:08.048 { 00:14:08.048 "method": "nvmf_subsystem_add_ns", 00:14:08.048 "params": { 00:14:08.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.048 "namespace": { 00:14:08.048 "nsid": 1, 00:14:08.048 "bdev_name": "malloc0", 00:14:08.048 "nguid": "4682A887A69D48CFA65EA0117A3FDDAA", 00:14:08.048 "uuid": "4682a887-a69d-48cf-a65e-a0117a3fddaa", 00:14:08.048 "no_auto_visible": false 00:14:08.048 } 00:14:08.048 } 00:14:08.048 }, 00:14:08.048 { 00:14:08.048 "method": "nvmf_subsystem_add_listener", 00:14:08.048 "params": { 00:14:08.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.048 "listen_address": { 00:14:08.048 "trtype": "TCP", 00:14:08.048 "adrfam": "IPv4", 00:14:08.048 "traddr": "10.0.0.3", 00:14:08.048 "trsvcid": "4420" 00:14:08.048 }, 00:14:08.048 "secure_channel": true 00:14:08.048 } 00:14:08.048 } 00:14:08.048 ] 00:14:08.048 } 00:14:08.048 ] 00:14:08.048 }' 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71847 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71847 00:14:08.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71847 ']' 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.048 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.048 [2024-11-26 19:22:06.461312] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:08.048 [2024-11-26 19:22:06.461561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.307 [2024-11-26 19:22:06.609024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.307 [2024-11-26 19:22:06.652042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.307 [2024-11-26 19:22:06.652109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:08.307 [2024-11-26 19:22:06.652136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.307 [2024-11-26 19:22:06.652144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.307 [2024-11-26 19:22:06.652151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.307 [2024-11-26 19:22:06.652539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.566 [2024-11-26 19:22:06.817658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.566 [2024-11-26 19:22:06.891883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.566 [2024-11-26 19:22:06.923846] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:08.566 [2024-11-26 19:22:06.924116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71879 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71879 /var/tmp/bdevperf.sock 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71879 ']' 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
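For readers following the trace: the large timestamp-prefixed JSON documents echoed above are not config files on disk; the test substitutes them onto the command line so the applications read them as /dev/fd/62 (nvmf_tgt) and /dev/fd/63 (bdevperf). A minimal sketch of that pattern, reusing the binaries and flags exactly as traced; TGT_CONFIG_JSON and BPERF_CONFIG_JSON are illustrative placeholders for the echoed documents, not variables used by the test itself:

  # Target side: the echoed target config reaches nvmf_tgt as /dev/fd/62 inside the test netns
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$TGT_CONFIG_JSON")
  # Initiator side: bdevperf idles in -z mode with its own config on /dev/fd/63,
  # and bdevperf.py later starts the actual 10-second verify workload over the RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$BPERF_CONFIG_JSON") &
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests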
00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.135 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:09.135 "subsystems": [ 00:14:09.135 { 00:14:09.135 "subsystem": "keyring", 00:14:09.135 "config": [ 00:14:09.135 { 00:14:09.135 "method": "keyring_file_add_key", 00:14:09.135 "params": { 00:14:09.135 "name": "key0", 00:14:09.135 "path": "/tmp/tmp.aPdljVs0ge" 00:14:09.135 } 00:14:09.135 } 00:14:09.135 ] 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "subsystem": "iobuf", 00:14:09.135 "config": [ 00:14:09.135 { 00:14:09.135 "method": "iobuf_set_options", 00:14:09.135 "params": { 00:14:09.135 "small_pool_count": 8192, 00:14:09.135 "large_pool_count": 1024, 00:14:09.135 "small_bufsize": 8192, 00:14:09.135 "large_bufsize": 135168, 00:14:09.135 "enable_numa": false 00:14:09.135 } 00:14:09.135 } 00:14:09.135 ] 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "subsystem": "sock", 00:14:09.135 "config": [ 00:14:09.135 { 00:14:09.135 "method": "sock_set_default_impl", 00:14:09.135 "params": { 00:14:09.135 "impl_name": "uring" 00:14:09.135 } 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "method": "sock_impl_set_options", 00:14:09.135 "params": { 00:14:09.135 "impl_name": "ssl", 00:14:09.135 "recv_buf_size": 4096, 00:14:09.135 "send_buf_size": 4096, 00:14:09.135 "enable_recv_pipe": true, 00:14:09.135 "enable_quickack": false, 00:14:09.135 "enable_placement_id": 0, 00:14:09.135 "enable_zerocopy_send_server": true, 00:14:09.135 "enable_zerocopy_send_client": false, 00:14:09.135 "zerocopy_threshold": 0, 00:14:09.135 "tls_version": 0, 00:14:09.135 "enable_ktls": false 00:14:09.135 } 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "method": "sock_impl_set_options", 00:14:09.135 "params": { 00:14:09.135 "impl_name": "posix", 00:14:09.135 "recv_buf_size": 2097152, 00:14:09.135 "send_buf_size": 2097152, 00:14:09.135 "enable_recv_pipe": true, 00:14:09.135 "enable_quickack": false, 00:14:09.135 "enable_placement_id": 0, 00:14:09.135 "enable_zerocopy_send_server": true, 00:14:09.135 "enable_zerocopy_send_client": false, 00:14:09.135 "zerocopy_threshold": 0, 00:14:09.135 "tls_version": 0, 00:14:09.135 "enable_ktls": false 00:14:09.135 } 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "method": "sock_impl_set_options", 00:14:09.135 "params": { 00:14:09.135 "impl_name": "uring", 00:14:09.135 "recv_buf_size": 2097152, 00:14:09.135 "send_buf_size": 2097152, 00:14:09.135 "enable_recv_pipe": true, 00:14:09.135 "enable_quickack": false, 00:14:09.135 "enable_placement_id": 0, 00:14:09.135 "enable_zerocopy_send_server": false, 00:14:09.135 "enable_zerocopy_send_client": false, 00:14:09.135 "zerocopy_threshold": 0, 00:14:09.135 "tls_version": 0, 00:14:09.135 "enable_ktls": false 00:14:09.135 } 00:14:09.135 } 00:14:09.135 ] 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "subsystem": "vmd", 00:14:09.135 "config": [] 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "subsystem": "accel", 00:14:09.135 "config": [ 00:14:09.135 { 00:14:09.135 "method": "accel_set_options", 00:14:09.135 "params": { 00:14:09.135 "small_cache_size": 128, 00:14:09.135 "large_cache_size": 16, 00:14:09.135 "task_count": 2048, 00:14:09.135 "sequence_count": 2048, 00:14:09.135 "buf_count": 2048 00:14:09.135 } 00:14:09.135 } 00:14:09.135 ] 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "subsystem": "bdev", 00:14:09.135 "config": [ 00:14:09.135 { 00:14:09.135 "method": 
"bdev_set_options", 00:14:09.135 "params": { 00:14:09.135 "bdev_io_pool_size": 65535, 00:14:09.135 "bdev_io_cache_size": 256, 00:14:09.135 "bdev_auto_examine": true, 00:14:09.135 "iobuf_small_cache_size": 128, 00:14:09.135 "iobuf_large_cache_size": 16 00:14:09.135 } 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "method": "bdev_raid_set_options", 00:14:09.135 "params": { 00:14:09.135 "process_window_size_kb": 1024, 00:14:09.135 "process_max_bandwidth_mb_sec": 0 00:14:09.135 } 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "method": "bdev_iscsi_set_options", 00:14:09.135 "params": { 00:14:09.135 "timeout_sec": 30 00:14:09.135 } 00:14:09.135 }, 00:14:09.135 { 00:14:09.135 "method": "bdev_nvme_set_options", 00:14:09.135 "params": { 00:14:09.135 "action_on_timeout": "none", 00:14:09.135 "timeout_us": 0, 00:14:09.135 "timeout_admin_us": 0, 00:14:09.135 "keep_alive_timeout_ms": 10000, 00:14:09.135 "arbitration_burst": 0, 00:14:09.135 "low_priority_weight": 0, 00:14:09.135 "medium_priority_weight": 0, 00:14:09.135 "high_priority_weight": 0, 00:14:09.135 "nvme_adminq_poll_period_us": 10000, 00:14:09.135 "nvme_ioq_poll_period_us": 0, 00:14:09.135 "io_queue_requests": 512, 00:14:09.135 "delay_cmd_submit": true, 00:14:09.135 "transport_retry_count": 4, 00:14:09.135 "bdev_retry_count": 3, 00:14:09.135 "transport_ack_timeout": 0, 00:14:09.135 "ctrlr_loss_timeout_sec": 0, 00:14:09.135 "reconnect_delay_sec": 0, 00:14:09.135 "fast_io_fail_timeout_sec": 0, 00:14:09.135 "disable_auto_failback": false, 00:14:09.135 "generate_uuids": false, 00:14:09.135 "transport_tos": 0, 00:14:09.135 "nvme_error_stat": false, 00:14:09.135 "rdma_srq_size": 0, 00:14:09.135 "io_path_stat": false, 00:14:09.135 "allow_accel_sequence": false, 00:14:09.135 "rdma_max_cq_size": 0, 00:14:09.135 "rdma_cm_event_timeout_ms": 0, 00:14:09.135 "dhchap_digests": [ 00:14:09.135 "sha256", 00:14:09.135 "sha384", 00:14:09.135 "sha512" 00:14:09.135 ], 00:14:09.135 "dhchap_dhgroups": [ 00:14:09.135 "null", 00:14:09.136 "ffdhe2048", 00:14:09.136 "ffdhe3072", 00:14:09.136 "ffdhe4096", 00:14:09.136 "ffdhe6144", 00:14:09.136 "ffdhe8192" 00:14:09.136 ] 00:14:09.136 } 00:14:09.136 }, 00:14:09.136 { 00:14:09.136 "method": "bdev_nvme_attach_controller", 00:14:09.136 "params": { 00:14:09.136 "name": "TLSTEST", 00:14:09.136 "trtype": "TCP", 00:14:09.136 "adrfam": "IPv4", 00:14:09.136 "traddr": "10.0.0.3", 00:14:09.136 "trsvcid": "4420", 00:14:09.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.136 "prchk_reftag": false, 00:14:09.136 "prchk_guard": false, 00:14:09.136 "ctrlr_loss_timeout_sec": 0, 00:14:09.136 "reconnect_delay_sec": 0, 00:14:09.136 "fast_io_fail_timeout_sec": 0, 00:14:09.136 "psk": "key0", 00:14:09.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:09.136 "hdgst": false, 00:14:09.136 "ddgst": false, 00:14:09.136 "multipath": "multipath" 00:14:09.136 } 00:14:09.136 }, 00:14:09.136 { 00:14:09.136 "method": "bdev_nvme_set_hotplug", 00:14:09.136 "params": { 00:14:09.136 "period_us": 100000, 00:14:09.136 "enable": false 00:14:09.136 } 00:14:09.136 }, 00:14:09.136 { 00:14:09.136 "method": "bdev_wait_for_examine" 00:14:09.136 } 00:14:09.136 ] 00:14:09.136 }, 00:14:09.136 { 00:14:09.136 "subsystem": "nbd", 00:14:09.136 "config": [] 00:14:09.136 } 00:14:09.136 ] 00:14:09.136 }' 00:14:09.136 [2024-11-26 19:22:07.533816] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:14:09.136 [2024-11-26 19:22:07.534096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71879 ] 00:14:09.395 [2024-11-26 19:22:07.683001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.395 [2024-11-26 19:22:07.747794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.654 [2024-11-26 19:22:07.884332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.654 [2024-11-26 19:22:07.929796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.223 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.223 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:10.223 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:10.223 Running I/O for 10 seconds... 00:14:12.541 4602.00 IOPS, 17.98 MiB/s [2024-11-26T19:22:11.919Z] 4608.00 IOPS, 18.00 MiB/s [2024-11-26T19:22:12.857Z] 4565.33 IOPS, 17.83 MiB/s [2024-11-26T19:22:13.794Z] 4544.00 IOPS, 17.75 MiB/s [2024-11-26T19:22:14.761Z] 4543.00 IOPS, 17.75 MiB/s [2024-11-26T19:22:15.697Z] 4540.33 IOPS, 17.74 MiB/s [2024-11-26T19:22:16.633Z] 4555.14 IOPS, 17.79 MiB/s [2024-11-26T19:22:18.010Z] 4574.00 IOPS, 17.87 MiB/s [2024-11-26T19:22:18.950Z] 4586.56 IOPS, 17.92 MiB/s [2024-11-26T19:22:18.950Z] 4595.80 IOPS, 17.95 MiB/s 00:14:20.510 Latency(us) 00:14:20.510 [2024-11-26T19:22:18.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.510 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:20.510 Verification LBA range: start 0x0 length 0x2000 00:14:20.510 TLSTESTn1 : 10.01 4602.09 17.98 0.00 0.00 27766.03 5004.57 20614.05 00:14:20.510 [2024-11-26T19:22:18.950Z] =================================================================================================================== 00:14:20.510 [2024-11-26T19:22:18.950Z] Total : 4602.09 17.98 0.00 0.00 27766.03 5004.57 20614.05 00:14:20.510 { 00:14:20.510 "results": [ 00:14:20.510 { 00:14:20.510 "job": "TLSTESTn1", 00:14:20.510 "core_mask": "0x4", 00:14:20.510 "workload": "verify", 00:14:20.510 "status": "finished", 00:14:20.510 "verify_range": { 00:14:20.510 "start": 0, 00:14:20.510 "length": 8192 00:14:20.510 }, 00:14:20.510 "queue_depth": 128, 00:14:20.510 "io_size": 4096, 00:14:20.510 "runtime": 10.013704, 00:14:20.510 "iops": 4602.093291353529, 00:14:20.510 "mibps": 17.976926919349722, 00:14:20.510 "io_failed": 0, 00:14:20.510 "io_timeout": 0, 00:14:20.510 "avg_latency_us": 27766.029996764802, 00:14:20.510 "min_latency_us": 5004.567272727273, 00:14:20.510 "max_latency_us": 20614.05090909091 00:14:20.510 } 00:14:20.510 ], 00:14:20.510 "core_count": 1 00:14:20.510 } 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71879 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71879 ']' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 71879 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71879 00:14:20.510 killing process with pid 71879 00:14:20.510 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.510 00:14:20.510 Latency(us) 00:14:20.510 [2024-11-26T19:22:18.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.510 [2024-11-26T19:22:18.950Z] =================================================================================================================== 00:14:20.510 [2024-11-26T19:22:18.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71879' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71879 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71879 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71847 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71847 ']' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71847 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71847 00:14:20.510 killing process with pid 71847 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71847' 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71847 00:14:20.510 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71847 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72022 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:20.770 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72022 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72022 ']' 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.770 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.770 [2024-11-26 19:22:19.148929] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:20.771 [2024-11-26 19:22:19.149018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.030 [2024-11-26 19:22:19.303223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.030 [2024-11-26 19:22:19.353986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.030 [2024-11-26 19:22:19.354050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.030 [2024-11-26 19:22:19.354071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.030 [2024-11-26 19:22:19.354088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.030 [2024-11-26 19:22:19.354101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:21.030 [2024-11-26 19:22:19.354626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.030 [2024-11-26 19:22:19.412067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.aPdljVs0ge 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aPdljVs0ge 00:14:21.965 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:22.224 [2024-11-26 19:22:20.439548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.224 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:22.483 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:22.742 [2024-11-26 19:22:20.963668] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.742 [2024-11-26 19:22:20.963963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:22.742 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:23.001 malloc0 00:14:23.002 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:23.261 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:14:23.521 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72073 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72073 /var/tmp/bdevperf.sock 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72073 ']' 00:14:23.781 
19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.781 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.781 [2024-11-26 19:22:22.056274] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:23.781 [2024-11-26 19:22:22.056522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72073 ] 00:14:23.781 [2024-11-26 19:22:22.206630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.041 [2024-11-26 19:22:22.264887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.041 [2024-11-26 19:22:22.321373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.041 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.041 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:24.041 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:14:24.307 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:24.569 [2024-11-26 19:22:22.893434] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.569 nvme0n1 00:14:24.569 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.828 Running I/O for 1 seconds... 
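Condensed from the xtrace above: this pass (setup_nvmf_tgt in target/tls.sh) builds the TLS-enabled target with individual RPCs instead of a start-up config, then attaches a fresh bdevperf initiator using the same PSK. The commands are exactly as traced; /tmp/tmp.aPdljVs0ge is the PSK file used throughout this run, registered on both sides under the keyring name key0:

  # Target side (default RPC socket /var/tmp/spdk.sock)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: secure (TLS) channel
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side (bdevperf's RPC socket): same key, then a TLS-protected NVMe/TCP bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1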
00:14:25.780 4445.00 IOPS, 17.36 MiB/s 00:14:25.780 Latency(us) 00:14:25.780 [2024-11-26T19:22:24.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.780 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:25.780 Verification LBA range: start 0x0 length 0x2000 00:14:25.780 nvme0n1 : 1.02 4475.41 17.48 0.00 0.00 28292.41 5928.03 22163.08 00:14:25.780 [2024-11-26T19:22:24.220Z] =================================================================================================================== 00:14:25.780 [2024-11-26T19:22:24.220Z] Total : 4475.41 17.48 0.00 0.00 28292.41 5928.03 22163.08 00:14:25.780 { 00:14:25.780 "results": [ 00:14:25.780 { 00:14:25.780 "job": "nvme0n1", 00:14:25.780 "core_mask": "0x2", 00:14:25.780 "workload": "verify", 00:14:25.780 "status": "finished", 00:14:25.780 "verify_range": { 00:14:25.780 "start": 0, 00:14:25.780 "length": 8192 00:14:25.780 }, 00:14:25.780 "queue_depth": 128, 00:14:25.780 "io_size": 4096, 00:14:25.780 "runtime": 1.021806, 00:14:25.780 "iops": 4475.409226408927, 00:14:25.780 "mibps": 17.48206729065987, 00:14:25.780 "io_failed": 0, 00:14:25.780 "io_timeout": 0, 00:14:25.780 "avg_latency_us": 28292.405685545593, 00:14:25.780 "min_latency_us": 5928.029090909091, 00:14:25.780 "max_latency_us": 22163.083636363637 00:14:25.780 } 00:14:25.780 ], 00:14:25.780 "core_count": 1 00:14:25.780 } 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72073 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72073 ']' 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72073 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72073 00:14:25.780 killing process with pid 72073 00:14:25.780 Received shutdown signal, test time was about 1.000000 seconds 00:14:25.780 00:14:25.780 Latency(us) 00:14:25.780 [2024-11-26T19:22:24.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.780 [2024-11-26T19:22:24.220Z] =================================================================================================================== 00:14:25.780 [2024-11-26T19:22:24.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72073' 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72073 00:14:25.780 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72073 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72022 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72022 ']' 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72022 00:14:26.077 19:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72022 00:14:26.077 killing process with pid 72022 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72022' 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72022 00:14:26.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72022 00:14:26.336 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:26.336 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.336 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.336 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.336 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72117 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72117 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72117 ']' 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.337 [2024-11-26 19:22:24.633532] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:26.337 [2024-11-26 19:22:24.633767] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.596 [2024-11-26 19:22:24.775022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.596 [2024-11-26 19:22:24.825171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.596 [2024-11-26 19:22:24.825390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:26.596 [2024-11-26 19:22:24.825533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.596 [2024-11-26 19:22:24.825589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.596 [2024-11-26 19:22:24.825683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.596 [2024-11-26 19:22:24.826107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.596 [2024-11-26 19:22:24.878744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.596 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.596 [2024-11-26 19:22:24.987771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.596 malloc0 00:14:26.596 [2024-11-26 19:22:25.019095] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:26.596 [2024-11-26 19:22:25.019326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:26.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72141 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72141 /var/tmp/bdevperf.sock 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72141 ']' 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
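The repeated killprocess traces above (pids 71803, 71747, 71879, 71847, 72073, 72022) all follow the same autotest_common.sh pattern. A rough reconstruction from the xtrace alone; the real helper may differ in details:

  killprocess() {                                  # sketch only, reconstructed from the trace
    local pid=$1
    [ -n "$pid" ]                                  # a pid must be given
    kill -0 "$pid"                                 # fail fast if it already exited
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" != sudo ]                    # a sudo wrapper would be handled differently
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }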
00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.855 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.855 [2024-11-26 19:22:25.096118] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:26.855 [2024-11-26 19:22:25.096205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72141 ] 00:14:26.855 [2024-11-26 19:22:25.231886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.855 [2024-11-26 19:22:25.277601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.114 [2024-11-26 19:22:25.329954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:27.114 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.114 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:27.114 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPdljVs0ge 00:14:27.373 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:27.632 [2024-11-26 19:22:25.962214] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.632 nvme0n1 00:14:27.632 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:27.891 Running I/O for 1 seconds... 
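The 1-second verify run just launched is again driven from outside the bdevperf process, and once it completes the test captures both running configurations into the shell variables tgtcfg and bperfcfg (the two JSON documents that follow). From the trace, the three calls involved are (rpc_cmd is the autotest helper that forwards to the target's RPC socket):

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # run the workload
  rpc_cmd save_config                                                                 # target side  -> tgtcfg
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config   # bdevperf side -> bperfcfg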
00:14:28.828 4352.00 IOPS, 17.00 MiB/s 00:14:28.828 Latency(us) 00:14:28.828 [2024-11-26T19:22:27.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.828 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:28.828 Verification LBA range: start 0x0 length 0x2000 00:14:28.828 nvme0n1 : 1.02 4374.35 17.09 0.00 0.00 28967.04 6345.08 18588.39 00:14:28.828 [2024-11-26T19:22:27.268Z] =================================================================================================================== 00:14:28.828 [2024-11-26T19:22:27.268Z] Total : 4374.35 17.09 0.00 0.00 28967.04 6345.08 18588.39 00:14:28.828 { 00:14:28.828 "results": [ 00:14:28.828 { 00:14:28.828 "job": "nvme0n1", 00:14:28.828 "core_mask": "0x2", 00:14:28.828 "workload": "verify", 00:14:28.828 "status": "finished", 00:14:28.828 "verify_range": { 00:14:28.828 "start": 0, 00:14:28.829 "length": 8192 00:14:28.829 }, 00:14:28.829 "queue_depth": 128, 00:14:28.829 "io_size": 4096, 00:14:28.829 "runtime": 1.024152, 00:14:28.829 "iops": 4374.3506823205935, 00:14:28.829 "mibps": 17.08730735281482, 00:14:28.829 "io_failed": 0, 00:14:28.829 "io_timeout": 0, 00:14:28.829 "avg_latency_us": 28967.044987012985, 00:14:28.829 "min_latency_us": 6345.076363636364, 00:14:28.829 "max_latency_us": 18588.392727272727 00:14:28.829 } 00:14:28.829 ], 00:14:28.829 "core_count": 1 00:14:28.829 } 00:14:28.829 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:28.829 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.829 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.088 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.088 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:29.088 "subsystems": [ 00:14:29.088 { 00:14:29.088 "subsystem": "keyring", 00:14:29.088 "config": [ 00:14:29.088 { 00:14:29.088 "method": "keyring_file_add_key", 00:14:29.088 "params": { 00:14:29.088 "name": "key0", 00:14:29.088 "path": "/tmp/tmp.aPdljVs0ge" 00:14:29.088 } 00:14:29.088 } 00:14:29.088 ] 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "subsystem": "iobuf", 00:14:29.088 "config": [ 00:14:29.088 { 00:14:29.088 "method": "iobuf_set_options", 00:14:29.088 "params": { 00:14:29.088 "small_pool_count": 8192, 00:14:29.088 "large_pool_count": 1024, 00:14:29.088 "small_bufsize": 8192, 00:14:29.088 "large_bufsize": 135168, 00:14:29.088 "enable_numa": false 00:14:29.088 } 00:14:29.088 } 00:14:29.088 ] 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "subsystem": "sock", 00:14:29.088 "config": [ 00:14:29.088 { 00:14:29.088 "method": "sock_set_default_impl", 00:14:29.088 "params": { 00:14:29.088 "impl_name": "uring" 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "sock_impl_set_options", 00:14:29.088 "params": { 00:14:29.088 "impl_name": "ssl", 00:14:29.088 "recv_buf_size": 4096, 00:14:29.088 "send_buf_size": 4096, 00:14:29.088 "enable_recv_pipe": true, 00:14:29.088 "enable_quickack": false, 00:14:29.088 "enable_placement_id": 0, 00:14:29.088 "enable_zerocopy_send_server": true, 00:14:29.088 "enable_zerocopy_send_client": false, 00:14:29.088 "zerocopy_threshold": 0, 00:14:29.088 "tls_version": 0, 00:14:29.088 "enable_ktls": false 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "sock_impl_set_options", 00:14:29.088 "params": { 00:14:29.088 "impl_name": 
"posix", 00:14:29.088 "recv_buf_size": 2097152, 00:14:29.088 "send_buf_size": 2097152, 00:14:29.088 "enable_recv_pipe": true, 00:14:29.088 "enable_quickack": false, 00:14:29.088 "enable_placement_id": 0, 00:14:29.088 "enable_zerocopy_send_server": true, 00:14:29.088 "enable_zerocopy_send_client": false, 00:14:29.088 "zerocopy_threshold": 0, 00:14:29.088 "tls_version": 0, 00:14:29.088 "enable_ktls": false 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "sock_impl_set_options", 00:14:29.088 "params": { 00:14:29.088 "impl_name": "uring", 00:14:29.088 "recv_buf_size": 2097152, 00:14:29.088 "send_buf_size": 2097152, 00:14:29.088 "enable_recv_pipe": true, 00:14:29.088 "enable_quickack": false, 00:14:29.088 "enable_placement_id": 0, 00:14:29.088 "enable_zerocopy_send_server": false, 00:14:29.088 "enable_zerocopy_send_client": false, 00:14:29.088 "zerocopy_threshold": 0, 00:14:29.088 "tls_version": 0, 00:14:29.088 "enable_ktls": false 00:14:29.088 } 00:14:29.088 } 00:14:29.088 ] 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "subsystem": "vmd", 00:14:29.088 "config": [] 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "subsystem": "accel", 00:14:29.088 "config": [ 00:14:29.088 { 00:14:29.088 "method": "accel_set_options", 00:14:29.088 "params": { 00:14:29.088 "small_cache_size": 128, 00:14:29.088 "large_cache_size": 16, 00:14:29.088 "task_count": 2048, 00:14:29.088 "sequence_count": 2048, 00:14:29.088 "buf_count": 2048 00:14:29.088 } 00:14:29.088 } 00:14:29.088 ] 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "subsystem": "bdev", 00:14:29.088 "config": [ 00:14:29.088 { 00:14:29.088 "method": "bdev_set_options", 00:14:29.088 "params": { 00:14:29.088 "bdev_io_pool_size": 65535, 00:14:29.088 "bdev_io_cache_size": 256, 00:14:29.088 "bdev_auto_examine": true, 00:14:29.088 "iobuf_small_cache_size": 128, 00:14:29.088 "iobuf_large_cache_size": 16 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "bdev_raid_set_options", 00:14:29.088 "params": { 00:14:29.088 "process_window_size_kb": 1024, 00:14:29.088 "process_max_bandwidth_mb_sec": 0 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "bdev_iscsi_set_options", 00:14:29.088 "params": { 00:14:29.088 "timeout_sec": 30 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "bdev_nvme_set_options", 00:14:29.088 "params": { 00:14:29.088 "action_on_timeout": "none", 00:14:29.088 "timeout_us": 0, 00:14:29.088 "timeout_admin_us": 0, 00:14:29.088 "keep_alive_timeout_ms": 10000, 00:14:29.088 "arbitration_burst": 0, 00:14:29.088 "low_priority_weight": 0, 00:14:29.088 "medium_priority_weight": 0, 00:14:29.088 "high_priority_weight": 0, 00:14:29.088 "nvme_adminq_poll_period_us": 10000, 00:14:29.088 "nvme_ioq_poll_period_us": 0, 00:14:29.088 "io_queue_requests": 0, 00:14:29.088 "delay_cmd_submit": true, 00:14:29.088 "transport_retry_count": 4, 00:14:29.088 "bdev_retry_count": 3, 00:14:29.088 "transport_ack_timeout": 0, 00:14:29.088 "ctrlr_loss_timeout_sec": 0, 00:14:29.088 "reconnect_delay_sec": 0, 00:14:29.088 "fast_io_fail_timeout_sec": 0, 00:14:29.088 "disable_auto_failback": false, 00:14:29.088 "generate_uuids": false, 00:14:29.088 "transport_tos": 0, 00:14:29.088 "nvme_error_stat": false, 00:14:29.088 "rdma_srq_size": 0, 00:14:29.088 "io_path_stat": false, 00:14:29.088 "allow_accel_sequence": false, 00:14:29.088 "rdma_max_cq_size": 0, 00:14:29.088 "rdma_cm_event_timeout_ms": 0, 00:14:29.088 "dhchap_digests": [ 00:14:29.088 "sha256", 00:14:29.088 "sha384", 00:14:29.088 "sha512" 00:14:29.088 ], 00:14:29.088 
"dhchap_dhgroups": [ 00:14:29.088 "null", 00:14:29.088 "ffdhe2048", 00:14:29.088 "ffdhe3072", 00:14:29.088 "ffdhe4096", 00:14:29.088 "ffdhe6144", 00:14:29.088 "ffdhe8192" 00:14:29.088 ] 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "bdev_nvme_set_hotplug", 00:14:29.088 "params": { 00:14:29.088 "period_us": 100000, 00:14:29.088 "enable": false 00:14:29.088 } 00:14:29.088 }, 00:14:29.088 { 00:14:29.088 "method": "bdev_malloc_create", 00:14:29.088 "params": { 00:14:29.088 "name": "malloc0", 00:14:29.088 "num_blocks": 8192, 00:14:29.088 "block_size": 4096, 00:14:29.089 "physical_block_size": 4096, 00:14:29.089 "uuid": "097f12b5-c95f-410a-9e9b-b6aadccfe14a", 00:14:29.089 "optimal_io_boundary": 0, 00:14:29.089 "md_size": 0, 00:14:29.089 "dif_type": 0, 00:14:29.089 "dif_is_head_of_md": false, 00:14:29.089 "dif_pi_format": 0 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "bdev_wait_for_examine" 00:14:29.089 } 00:14:29.089 ] 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "subsystem": "nbd", 00:14:29.089 "config": [] 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "subsystem": "scheduler", 00:14:29.089 "config": [ 00:14:29.089 { 00:14:29.089 "method": "framework_set_scheduler", 00:14:29.089 "params": { 00:14:29.089 "name": "static" 00:14:29.089 } 00:14:29.089 } 00:14:29.089 ] 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "subsystem": "nvmf", 00:14:29.089 "config": [ 00:14:29.089 { 00:14:29.089 "method": "nvmf_set_config", 00:14:29.089 "params": { 00:14:29.089 "discovery_filter": "match_any", 00:14:29.089 "admin_cmd_passthru": { 00:14:29.089 "identify_ctrlr": false 00:14:29.089 }, 00:14:29.089 "dhchap_digests": [ 00:14:29.089 "sha256", 00:14:29.089 "sha384", 00:14:29.089 "sha512" 00:14:29.089 ], 00:14:29.089 "dhchap_dhgroups": [ 00:14:29.089 "null", 00:14:29.089 "ffdhe2048", 00:14:29.089 "ffdhe3072", 00:14:29.089 "ffdhe4096", 00:14:29.089 "ffdhe6144", 00:14:29.089 "ffdhe8192" 00:14:29.089 ] 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "nvmf_set_max_subsystems", 00:14:29.089 "params": { 00:14:29.089 "max_subsystems": 1024 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "nvmf_set_crdt", 00:14:29.089 "params": { 00:14:29.089 "crdt1": 0, 00:14:29.089 "crdt2": 0, 00:14:29.089 "crdt3": 0 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "nvmf_create_transport", 00:14:29.089 "params": { 00:14:29.089 "trtype": "TCP", 00:14:29.089 "max_queue_depth": 128, 00:14:29.089 "max_io_qpairs_per_ctrlr": 127, 00:14:29.089 "in_capsule_data_size": 4096, 00:14:29.089 "max_io_size": 131072, 00:14:29.089 "io_unit_size": 131072, 00:14:29.089 "max_aq_depth": 128, 00:14:29.089 "num_shared_buffers": 511, 00:14:29.089 "buf_cache_size": 4294967295, 00:14:29.089 "dif_insert_or_strip": false, 00:14:29.089 "zcopy": false, 00:14:29.089 "c2h_success": false, 00:14:29.089 "sock_priority": 0, 00:14:29.089 "abort_timeout_sec": 1, 00:14:29.089 "ack_timeout": 0, 00:14:29.089 "data_wr_pool_size": 0 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "nvmf_create_subsystem", 00:14:29.089 "params": { 00:14:29.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.089 "allow_any_host": false, 00:14:29.089 "serial_number": "00000000000000000000", 00:14:29.089 "model_number": "SPDK bdev Controller", 00:14:29.089 "max_namespaces": 32, 00:14:29.089 "min_cntlid": 1, 00:14:29.089 "max_cntlid": 65519, 00:14:29.089 "ana_reporting": false 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "nvmf_subsystem_add_host", 
00:14:29.089 "params": { 00:14:29.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.089 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.089 "psk": "key0" 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "nvmf_subsystem_add_ns", 00:14:29.089 "params": { 00:14:29.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.089 "namespace": { 00:14:29.089 "nsid": 1, 00:14:29.089 "bdev_name": "malloc0", 00:14:29.089 "nguid": "097F12B5C95F410A9E9BB6AADCCFE14A", 00:14:29.089 "uuid": "097f12b5-c95f-410a-9e9b-b6aadccfe14a", 00:14:29.089 "no_auto_visible": false 00:14:29.089 } 00:14:29.089 } 00:14:29.089 }, 00:14:29.089 { 00:14:29.089 "method": "nvmf_subsystem_add_listener", 00:14:29.089 "params": { 00:14:29.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.089 "listen_address": { 00:14:29.089 "trtype": "TCP", 00:14:29.089 "adrfam": "IPv4", 00:14:29.089 "traddr": "10.0.0.3", 00:14:29.089 "trsvcid": "4420" 00:14:29.089 }, 00:14:29.089 "secure_channel": false, 00:14:29.089 "sock_impl": "ssl" 00:14:29.089 } 00:14:29.089 } 00:14:29.089 ] 00:14:29.089 } 00:14:29.089 ] 00:14:29.089 }' 00:14:29.089 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:29.349 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:29.349 "subsystems": [ 00:14:29.349 { 00:14:29.349 "subsystem": "keyring", 00:14:29.349 "config": [ 00:14:29.349 { 00:14:29.349 "method": "keyring_file_add_key", 00:14:29.349 "params": { 00:14:29.349 "name": "key0", 00:14:29.349 "path": "/tmp/tmp.aPdljVs0ge" 00:14:29.349 } 00:14:29.349 } 00:14:29.349 ] 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "subsystem": "iobuf", 00:14:29.349 "config": [ 00:14:29.349 { 00:14:29.349 "method": "iobuf_set_options", 00:14:29.349 "params": { 00:14:29.349 "small_pool_count": 8192, 00:14:29.349 "large_pool_count": 1024, 00:14:29.349 "small_bufsize": 8192, 00:14:29.349 "large_bufsize": 135168, 00:14:29.349 "enable_numa": false 00:14:29.349 } 00:14:29.349 } 00:14:29.349 ] 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "subsystem": "sock", 00:14:29.349 "config": [ 00:14:29.349 { 00:14:29.349 "method": "sock_set_default_impl", 00:14:29.349 "params": { 00:14:29.349 "impl_name": "uring" 00:14:29.349 } 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "method": "sock_impl_set_options", 00:14:29.349 "params": { 00:14:29.349 "impl_name": "ssl", 00:14:29.349 "recv_buf_size": 4096, 00:14:29.349 "send_buf_size": 4096, 00:14:29.349 "enable_recv_pipe": true, 00:14:29.349 "enable_quickack": false, 00:14:29.349 "enable_placement_id": 0, 00:14:29.349 "enable_zerocopy_send_server": true, 00:14:29.349 "enable_zerocopy_send_client": false, 00:14:29.349 "zerocopy_threshold": 0, 00:14:29.349 "tls_version": 0, 00:14:29.349 "enable_ktls": false 00:14:29.349 } 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "method": "sock_impl_set_options", 00:14:29.349 "params": { 00:14:29.349 "impl_name": "posix", 00:14:29.349 "recv_buf_size": 2097152, 00:14:29.349 "send_buf_size": 2097152, 00:14:29.349 "enable_recv_pipe": true, 00:14:29.349 "enable_quickack": false, 00:14:29.349 "enable_placement_id": 0, 00:14:29.349 "enable_zerocopy_send_server": true, 00:14:29.349 "enable_zerocopy_send_client": false, 00:14:29.349 "zerocopy_threshold": 0, 00:14:29.349 "tls_version": 0, 00:14:29.349 "enable_ktls": false 00:14:29.349 } 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "method": "sock_impl_set_options", 00:14:29.349 "params": { 00:14:29.349 "impl_name": "uring", 00:14:29.349 
"recv_buf_size": 2097152, 00:14:29.349 "send_buf_size": 2097152, 00:14:29.349 "enable_recv_pipe": true, 00:14:29.349 "enable_quickack": false, 00:14:29.349 "enable_placement_id": 0, 00:14:29.349 "enable_zerocopy_send_server": false, 00:14:29.349 "enable_zerocopy_send_client": false, 00:14:29.349 "zerocopy_threshold": 0, 00:14:29.349 "tls_version": 0, 00:14:29.349 "enable_ktls": false 00:14:29.349 } 00:14:29.349 } 00:14:29.349 ] 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "subsystem": "vmd", 00:14:29.349 "config": [] 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "subsystem": "accel", 00:14:29.349 "config": [ 00:14:29.349 { 00:14:29.349 "method": "accel_set_options", 00:14:29.349 "params": { 00:14:29.349 "small_cache_size": 128, 00:14:29.349 "large_cache_size": 16, 00:14:29.349 "task_count": 2048, 00:14:29.349 "sequence_count": 2048, 00:14:29.349 "buf_count": 2048 00:14:29.349 } 00:14:29.349 } 00:14:29.349 ] 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "subsystem": "bdev", 00:14:29.349 "config": [ 00:14:29.349 { 00:14:29.349 "method": "bdev_set_options", 00:14:29.349 "params": { 00:14:29.349 "bdev_io_pool_size": 65535, 00:14:29.349 "bdev_io_cache_size": 256, 00:14:29.349 "bdev_auto_examine": true, 00:14:29.349 "iobuf_small_cache_size": 128, 00:14:29.349 "iobuf_large_cache_size": 16 00:14:29.349 } 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "method": "bdev_raid_set_options", 00:14:29.349 "params": { 00:14:29.349 "process_window_size_kb": 1024, 00:14:29.349 "process_max_bandwidth_mb_sec": 0 00:14:29.349 } 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "method": "bdev_iscsi_set_options", 00:14:29.349 "params": { 00:14:29.349 "timeout_sec": 30 00:14:29.349 } 00:14:29.349 }, 00:14:29.349 { 00:14:29.349 "method": "bdev_nvme_set_options", 00:14:29.349 "params": { 00:14:29.349 "action_on_timeout": "none", 00:14:29.349 "timeout_us": 0, 00:14:29.349 "timeout_admin_us": 0, 00:14:29.349 "keep_alive_timeout_ms": 10000, 00:14:29.349 "arbitration_burst": 0, 00:14:29.349 "low_priority_weight": 0, 00:14:29.349 "medium_priority_weight": 0, 00:14:29.349 "high_priority_weight": 0, 00:14:29.349 "nvme_adminq_poll_period_us": 10000, 00:14:29.349 "nvme_ioq_poll_period_us": 0, 00:14:29.349 "io_queue_requests": 512, 00:14:29.349 "delay_cmd_submit": true, 00:14:29.350 "transport_retry_count": 4, 00:14:29.350 "bdev_retry_count": 3, 00:14:29.350 "transport_ack_timeout": 0, 00:14:29.350 "ctrlr_loss_timeout_sec": 0, 00:14:29.350 "reconnect_delay_sec": 0, 00:14:29.350 "fast_io_fail_timeout_sec": 0, 00:14:29.350 "disable_auto_failback": false, 00:14:29.350 "generate_uuids": false, 00:14:29.350 "transport_tos": 0, 00:14:29.350 "nvme_error_stat": false, 00:14:29.350 "rdma_srq_size": 0, 00:14:29.350 "io_path_stat": false, 00:14:29.350 "allow_accel_sequence": false, 00:14:29.350 "rdma_max_cq_size": 0, 00:14:29.350 "rdma_cm_event_timeout_ms": 0, 00:14:29.350 "dhchap_digests": [ 00:14:29.350 "sha256", 00:14:29.350 "sha384", 00:14:29.350 "sha512" 00:14:29.350 ], 00:14:29.350 "dhchap_dhgroups": [ 00:14:29.350 "null", 00:14:29.350 "ffdhe2048", 00:14:29.350 "ffdhe3072", 00:14:29.350 "ffdhe4096", 00:14:29.350 "ffdhe6144", 00:14:29.350 "ffdhe8192" 00:14:29.350 ] 00:14:29.350 } 00:14:29.350 }, 00:14:29.350 { 00:14:29.350 "method": "bdev_nvme_attach_controller", 00:14:29.350 "params": { 00:14:29.350 "name": "nvme0", 00:14:29.350 "trtype": "TCP", 00:14:29.350 "adrfam": "IPv4", 00:14:29.350 "traddr": "10.0.0.3", 00:14:29.350 "trsvcid": "4420", 00:14:29.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.350 "prchk_reftag": false, 00:14:29.350 
"prchk_guard": false, 00:14:29.350 "ctrlr_loss_timeout_sec": 0, 00:14:29.350 "reconnect_delay_sec": 0, 00:14:29.350 "fast_io_fail_timeout_sec": 0, 00:14:29.350 "psk": "key0", 00:14:29.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:29.350 "hdgst": false, 00:14:29.350 "ddgst": false, 00:14:29.350 "multipath": "multipath" 00:14:29.350 } 00:14:29.350 }, 00:14:29.350 { 00:14:29.350 "method": "bdev_nvme_set_hotplug", 00:14:29.350 "params": { 00:14:29.350 "period_us": 100000, 00:14:29.350 "enable": false 00:14:29.350 } 00:14:29.350 }, 00:14:29.350 { 00:14:29.350 "method": "bdev_enable_histogram", 00:14:29.350 "params": { 00:14:29.350 "name": "nvme0n1", 00:14:29.350 "enable": true 00:14:29.350 } 00:14:29.350 }, 00:14:29.350 { 00:14:29.350 "method": "bdev_wait_for_examine" 00:14:29.350 } 00:14:29.350 ] 00:14:29.350 }, 00:14:29.350 { 00:14:29.350 "subsystem": "nbd", 00:14:29.350 "config": [] 00:14:29.350 } 00:14:29.350 ] 00:14:29.350 }' 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72141 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72141 ']' 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72141 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72141 00:14:29.350 killing process with pid 72141 00:14:29.350 Received shutdown signal, test time was about 1.000000 seconds 00:14:29.350 00:14:29.350 Latency(us) 00:14:29.350 [2024-11-26T19:22:27.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.350 [2024-11-26T19:22:27.790Z] =================================================================================================================== 00:14:29.350 [2024-11-26T19:22:27.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72141' 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72141 00:14:29.350 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72141 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72117 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72117 ']' 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72117 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72117 00:14:29.610 killing process with pid 72117 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72117' 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72117 00:14:29.610 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72117 00:14:29.869 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:29.869 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:29.869 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:29.869 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:29.869 "subsystems": [ 00:14:29.869 { 00:14:29.869 "subsystem": "keyring", 00:14:29.869 "config": [ 00:14:29.869 { 00:14:29.869 "method": "keyring_file_add_key", 00:14:29.869 "params": { 00:14:29.869 "name": "key0", 00:14:29.869 "path": "/tmp/tmp.aPdljVs0ge" 00:14:29.869 } 00:14:29.869 } 00:14:29.869 ] 00:14:29.869 }, 00:14:29.869 { 00:14:29.869 "subsystem": "iobuf", 00:14:29.869 "config": [ 00:14:29.869 { 00:14:29.869 "method": "iobuf_set_options", 00:14:29.869 "params": { 00:14:29.869 "small_pool_count": 8192, 00:14:29.869 "large_pool_count": 1024, 00:14:29.869 "small_bufsize": 8192, 00:14:29.869 "large_bufsize": 135168, 00:14:29.869 "enable_numa": false 00:14:29.869 } 00:14:29.869 } 00:14:29.869 ] 00:14:29.869 }, 00:14:29.869 { 00:14:29.869 "subsystem": "sock", 00:14:29.869 "config": [ 00:14:29.869 { 00:14:29.869 "method": "sock_set_default_impl", 00:14:29.869 "params": { 00:14:29.869 "impl_name": "uring" 00:14:29.869 } 00:14:29.869 }, 00:14:29.869 { 00:14:29.869 "method": "sock_impl_set_options", 00:14:29.869 "params": { 00:14:29.869 "impl_name": "ssl", 00:14:29.869 "recv_buf_size": 4096, 00:14:29.869 "send_buf_size": 4096, 00:14:29.869 "enable_recv_pipe": true, 00:14:29.869 "enable_quickack": false, 00:14:29.869 "enable_placement_id": 0, 00:14:29.869 "enable_zerocopy_send_server": true, 00:14:29.869 "enable_zerocopy_send_client": false, 00:14:29.869 "zerocopy_threshold": 0, 00:14:29.869 "tls_version": 0, 00:14:29.869 "enable_ktls": false 00:14:29.869 } 00:14:29.869 }, 00:14:29.869 { 00:14:29.870 "method": "sock_impl_set_options", 00:14:29.870 "params": { 00:14:29.870 "impl_name": "posix", 00:14:29.870 "recv_buf_size": 2097152, 00:14:29.870 "send_buf_size": 2097152, 00:14:29.870 "enable_recv_pipe": true, 00:14:29.870 "enable_quickack": false, 00:14:29.870 "enable_placement_id": 0, 00:14:29.870 "enable_zerocopy_send_server": true, 00:14:29.870 "enable_zerocopy_send_client": false, 00:14:29.870 "zerocopy_threshold": 0, 00:14:29.870 "tls_version": 0, 00:14:29.870 "enable_ktls": false 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "sock_impl_set_options", 00:14:29.870 "params": { 00:14:29.870 "impl_name": "uring", 00:14:29.870 "recv_buf_size": 2097152, 00:14:29.870 "send_buf_size": 2097152, 00:14:29.870 "enable_recv_pipe": true, 00:14:29.870 "enable_quickack": false, 00:14:29.870 "enable_placement_id": 0, 00:14:29.870 "enable_zerocopy_send_server": false, 00:14:29.870 "enable_zerocopy_send_client": false, 00:14:29.870 "zerocopy_threshold": 0, 00:14:29.870 "tls_version": 0, 00:14:29.870 "enable_ktls": false 00:14:29.870 } 00:14:29.870 } 00:14:29.870 ] 00:14:29.870 }, 00:14:29.870 { 
00:14:29.870 "subsystem": "vmd", 00:14:29.870 "config": [] 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "subsystem": "accel", 00:14:29.870 "config": [ 00:14:29.870 { 00:14:29.870 "method": "accel_set_options", 00:14:29.870 "params": { 00:14:29.870 "small_cache_size": 128, 00:14:29.870 "large_cache_size": 16, 00:14:29.870 "task_count": 2048, 00:14:29.870 "sequence_count": 2048, 00:14:29.870 "buf_count": 2048 00:14:29.870 } 00:14:29.870 } 00:14:29.870 ] 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "subsystem": "bdev", 00:14:29.870 "config": [ 00:14:29.870 { 00:14:29.870 "method": "bdev_set_options", 00:14:29.870 "params": { 00:14:29.870 "bdev_io_pool_size": 65535, 00:14:29.870 "bdev_io_cache_size": 256, 00:14:29.870 "bdev_auto_examine": true, 00:14:29.870 "iobuf_small_cache_size": 128, 00:14:29.870 "iobuf_large_cache_size": 16 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "bdev_raid_set_options", 00:14:29.870 "params": { 00:14:29.870 "process_window_size_kb": 1024, 00:14:29.870 "process_max_bandwidth_mb_sec": 0 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "bdev_iscsi_set_options", 00:14:29.870 "params": { 00:14:29.870 "timeout_sec": 30 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "bdev_nvme_set_options", 00:14:29.870 "params": { 00:14:29.870 "action_on_timeout": "none", 00:14:29.870 "timeout_us": 0, 00:14:29.870 "timeout_admin_us": 0, 00:14:29.870 "keep_alive_timeout_ms": 10000, 00:14:29.870 "arbitration_burst": 0, 00:14:29.870 "low_priority_weight": 0, 00:14:29.870 "medium_priority_weight": 0, 00:14:29.870 "high_priority_weight": 0, 00:14:29.870 "nvme_adminq_poll_period_us": 10000, 00:14:29.870 "nvme_ioq_poll_period_us": 0, 00:14:29.870 "io_queue_requests": 0, 00:14:29.870 "delay_cmd_submit": true, 00:14:29.870 "transport_retry_count": 4, 00:14:29.870 "bdev_retry_count": 3, 00:14:29.870 "transport_ack_timeout": 0, 00:14:29.870 "ctrlr_loss_timeout_sec": 0, 00:14:29.870 "reconnect_delay_sec": 0, 00:14:29.870 "fast_io_fail_timeout_sec": 0, 00:14:29.870 "disable_auto_failback": false, 00:14:29.870 "generate_uuids": false, 00:14:29.870 "transport_tos": 0, 00:14:29.870 "nvme_error_stat": false, 00:14:29.870 "rdma_srq_size": 0, 00:14:29.870 "io_path_stat": false, 00:14:29.870 "allow_accel_sequence": false, 00:14:29.870 "rdma_max_cq_size": 0, 00:14:29.870 "rdma_cm_event_timeout_ms": 0, 00:14:29.870 "dhchap_digests": [ 00:14:29.870 "sha256", 00:14:29.870 "sha384", 00:14:29.870 "sha512" 00:14:29.870 ], 00:14:29.870 "dhchap_dhgroups": [ 00:14:29.870 "null", 00:14:29.870 "ffdhe2048", 00:14:29.870 "ffdhe3072", 00:14:29.870 "ffdhe4096", 00:14:29.870 "ffdhe6144", 00:14:29.870 "ffdhe8192" 00:14:29.870 ] 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "bdev_nvme_set_hotplug", 00:14:29.870 "params": { 00:14:29.870 "period_us": 100000, 00:14:29.870 "enable": false 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "bdev_malloc_create", 00:14:29.870 "params": { 00:14:29.870 "name": "malloc0", 00:14:29.870 "num_blocks": 8192, 00:14:29.870 "block_size": 4096, 00:14:29.870 "physical_block_size": 4096, 00:14:29.870 "uuid": "097f12b5-c95f-410a-9e9b-b6aadccfe14a", 00:14:29.870 "optimal_io_boundary": 0, 00:14:29.870 "md_size": 0, 00:14:29.870 "dif_type": 0, 00:14:29.870 "dif_is_head_of_md": false, 00:14:29.870 "dif_pi_format": 0 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "bdev_wait_for_examine" 00:14:29.870 } 00:14:29.870 ] 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "subsystem": 
"nbd", 00:14:29.870 "config": [] 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "subsystem": "scheduler", 00:14:29.870 "config": [ 00:14:29.870 { 00:14:29.870 "method": "framework_set_scheduler", 00:14:29.870 "params": { 00:14:29.870 "name": "static" 00:14:29.870 } 00:14:29.870 } 00:14:29.870 ] 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "subsystem": "nvmf", 00:14:29.870 "config": [ 00:14:29.870 { 00:14:29.870 "method": "nvmf_set_config", 00:14:29.870 "params": { 00:14:29.870 "discovery_filter": "match_any", 00:14:29.870 "admin_cmd_passthru": { 00:14:29.870 "identify_ctrlr": false 00:14:29.870 }, 00:14:29.870 "dhchap_digests": [ 00:14:29.870 "sha256", 00:14:29.870 "sha384", 00:14:29.870 "sha512" 00:14:29.870 ], 00:14:29.870 "dhchap_dhgroups": [ 00:14:29.870 "null", 00:14:29.870 "ffdhe2048", 00:14:29.870 "ffdhe3072", 00:14:29.870 "ffdhe4096", 00:14:29.870 "ffdhe6144", 00:14:29.870 "ffdhe8192" 00:14:29.870 ] 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "nvmf_set_max_subsystems", 00:14:29.870 "params": { 00:14:29.870 "max_subsystems": 1024 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "nvmf_set_crdt", 00:14:29.870 "params": { 00:14:29.870 "crdt1": 0, 00:14:29.870 "crdt2": 0, 00:14:29.870 "crdt3": 0 00:14:29.870 } 00:14:29.870 }, 00:14:29.870 { 00:14:29.870 "method": "nvmf_create_transport", 00:14:29.870 "params": { 00:14:29.870 "trtype": "TCP", 00:14:29.870 "max_queue_depth": 128, 00:14:29.871 "max_io_qpairs_per_ctrlr": 127, 00:14:29.871 "in_capsule_data_size": 4096, 00:14:29.871 "max_io_size": 131072, 00:14:29.871 "io_unit_size": 131072, 00:14:29.871 "max_aq_depth": 128, 00:14:29.871 "num_shared_buffers": 511, 00:14:29.871 "buf_cache_size": 4294967295, 00:14:29.871 "dif_insert_or_strip": false, 00:14:29.871 "zcopy": false, 00:14:29.871 "c2h_success": false, 00:14:29.871 "sock_priority": 0, 00:14:29.871 "abort_timeout_sec": 1, 00:14:29.871 "ack_timeout": 0, 00:14:29.871 "data_wr_pool_size": 0 00:14:29.871 } 00:14:29.871 }, 00:14:29.871 { 00:14:29.871 "method": "nvmf_create_subsystem", 00:14:29.871 "params": { 00:14:29.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.871 "allow_any_host": false, 00:14:29.871 "serial_number": "00000000000000000000", 00:14:29.871 "model_number": "SPDK bdev Controller", 00:14:29.871 "max_namespaces": 32, 00:14:29.871 "min_cntlid": 1, 00:14:29.871 "max_cntlid": 65519, 00:14:29.871 "ana_reporting": false 00:14:29.871 } 00:14:29.871 }, 00:14:29.871 { 00:14:29.871 "method": "nvmf_subsystem_add_host", 00:14:29.871 "params": { 00:14:29.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.871 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.871 "psk": "key0" 00:14:29.871 } 00:14:29.871 }, 00:14:29.871 { 00:14:29.871 "method": "nvmf_subsystem_add_ns", 00:14:29.871 "params": { 00:14:29.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.871 "namespace": { 00:14:29.871 "nsid": 1, 00:14:29.871 "bdev_name": "malloc0", 00:14:29.871 "nguid": "097F12B5C95F410A9E9BB6AADCCFE14A", 00:14:29.871 "uuid": "097f12b5-c95f-410a-9e9b-b6aadccfe14a", 00:14:29.871 "no_auto_visible": false 00:14:29.871 } 00:14:29.871 } 00:14:29.871 }, 00:14:29.871 { 00:14:29.871 "method": "nvmf_subsystem_add_listener", 00:14:29.871 "params": { 00:14:29.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.871 "listen_address": { 00:14:29.871 "trtype": "TCP", 00:14:29.871 "adrfam": "IPv4", 00:14:29.871 "traddr": "10.0.0.3", 00:14:29.871 "trsvcid": "4420" 00:14:29.871 }, 00:14:29.871 "secure_channel": false, 00:14:29.871 "sock_impl": "ssl" 00:14:29.871 } 00:14:29.871 } 
00:14:29.871 ] 00:14:29.871 } 00:14:29.871 ] 00:14:29.871 }' 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72192 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72192 00:14:29.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72192 ']' 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.871 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 [2024-11-26 19:22:28.219506] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:29.871 [2024-11-26 19:22:28.219596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.131 [2024-11-26 19:22:28.358840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.131 [2024-11-26 19:22:28.403229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.131 [2024-11-26 19:22:28.403285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.131 [2024-11-26 19:22:28.403312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.131 [2024-11-26 19:22:28.403334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.131 [2024-11-26 19:22:28.403341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
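The target restarted here is not configured RPC-by-RPC: nvmfappstart hands the whole JSON blob echoed above to nvmf_tgt through -c /dev/fd/62 at startup. A minimal hedged sketch of that same pattern, keeping only the keyring piece of the config above (binary path, key name and key path are copied from the log; the process substitution merely stands in for the test's own fd plumbing):

# Hedged sketch, not the test's exact wiring: start nvmf_tgt with a trimmed
# JSON config delivered over a file descriptor, as tls.sh does via /dev/fd/62.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.aPdljVs0ge" }
        }
      ]
    }
  ]
}
EOF
)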
00:14:30.131 [2024-11-26 19:22:28.403766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.391 [2024-11-26 19:22:28.569845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.391 [2024-11-26 19:22:28.646380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.391 [2024-11-26 19:22:28.678341] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:30.391 [2024-11-26 19:22:28.678546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72227 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72227 /var/tmp/bdevperf.sock 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72227 ']' 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
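At this point the target notices above report the experimental TLS listener up on 10.0.0.3:4420, and the script turns to launching bdevperf against it. A hypothetical spot check one could run against the target's default RPC socket (/var/tmp/spdk.sock) to confirm what those notices say; this is not part of tls.sh:

# Hypothetical manual check against the running target.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1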
00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:30.967 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:30.967 "subsystems": [ 00:14:30.967 { 00:14:30.967 "subsystem": "keyring", 00:14:30.967 "config": [ 00:14:30.967 { 00:14:30.967 "method": "keyring_file_add_key", 00:14:30.967 "params": { 00:14:30.967 "name": "key0", 00:14:30.967 "path": "/tmp/tmp.aPdljVs0ge" 00:14:30.967 } 00:14:30.967 } 00:14:30.967 ] 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "subsystem": "iobuf", 00:14:30.967 "config": [ 00:14:30.967 { 00:14:30.967 "method": "iobuf_set_options", 00:14:30.967 "params": { 00:14:30.967 "small_pool_count": 8192, 00:14:30.967 "large_pool_count": 1024, 00:14:30.967 "small_bufsize": 8192, 00:14:30.967 "large_bufsize": 135168, 00:14:30.967 "enable_numa": false 00:14:30.967 } 00:14:30.967 } 00:14:30.967 ] 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "subsystem": "sock", 00:14:30.967 "config": [ 00:14:30.967 { 00:14:30.967 "method": "sock_set_default_impl", 00:14:30.967 "params": { 00:14:30.967 "impl_name": "uring" 00:14:30.967 } 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "method": "sock_impl_set_options", 00:14:30.967 "params": { 00:14:30.967 "impl_name": "ssl", 00:14:30.967 "recv_buf_size": 4096, 00:14:30.967 "send_buf_size": 4096, 00:14:30.967 "enable_recv_pipe": true, 00:14:30.967 "enable_quickack": false, 00:14:30.967 "enable_placement_id": 0, 00:14:30.967 "enable_zerocopy_send_server": true, 00:14:30.967 "enable_zerocopy_send_client": false, 00:14:30.967 "zerocopy_threshold": 0, 00:14:30.967 "tls_version": 0, 00:14:30.967 "enable_ktls": false 00:14:30.967 } 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "method": "sock_impl_set_options", 00:14:30.967 "params": { 00:14:30.967 "impl_name": "posix", 00:14:30.967 "recv_buf_size": 2097152, 00:14:30.967 "send_buf_size": 2097152, 00:14:30.967 "enable_recv_pipe": true, 00:14:30.967 "enable_quickack": false, 00:14:30.967 "enable_placement_id": 0, 00:14:30.967 "enable_zerocopy_send_server": true, 00:14:30.967 "enable_zerocopy_send_client": false, 00:14:30.967 "zerocopy_threshold": 0, 00:14:30.967 "tls_version": 0, 00:14:30.967 "enable_ktls": false 00:14:30.967 } 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "method": "sock_impl_set_options", 00:14:30.967 "params": { 00:14:30.967 "impl_name": "uring", 00:14:30.967 "recv_buf_size": 2097152, 00:14:30.967 "send_buf_size": 2097152, 00:14:30.967 "enable_recv_pipe": true, 00:14:30.967 "enable_quickack": false, 00:14:30.967 "enable_placement_id": 0, 00:14:30.967 "enable_zerocopy_send_server": false, 00:14:30.967 "enable_zerocopy_send_client": false, 00:14:30.967 "zerocopy_threshold": 0, 00:14:30.967 "tls_version": 0, 00:14:30.967 "enable_ktls": false 00:14:30.967 } 00:14:30.967 } 00:14:30.967 ] 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "subsystem": "vmd", 00:14:30.967 "config": [] 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "subsystem": "accel", 00:14:30.967 "config": [ 00:14:30.967 { 00:14:30.967 "method": "accel_set_options", 00:14:30.967 "params": { 00:14:30.967 "small_cache_size": 128, 00:14:30.967 "large_cache_size": 16, 00:14:30.967 "task_count": 2048, 00:14:30.967 "sequence_count": 2048, 
00:14:30.967 "buf_count": 2048 00:14:30.967 } 00:14:30.967 } 00:14:30.967 ] 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "subsystem": "bdev", 00:14:30.967 "config": [ 00:14:30.967 { 00:14:30.967 "method": "bdev_set_options", 00:14:30.967 "params": { 00:14:30.967 "bdev_io_pool_size": 65535, 00:14:30.967 "bdev_io_cache_size": 256, 00:14:30.967 "bdev_auto_examine": true, 00:14:30.967 "iobuf_small_cache_size": 128, 00:14:30.967 "iobuf_large_cache_size": 16 00:14:30.967 } 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "method": "bdev_raid_set_options", 00:14:30.967 "params": { 00:14:30.967 "process_window_size_kb": 1024, 00:14:30.967 "process_max_bandwidth_mb_sec": 0 00:14:30.967 } 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "method": "bdev_iscsi_set_options", 00:14:30.967 "params": { 00:14:30.967 "timeout_sec": 30 00:14:30.967 } 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "method": "bdev_nvme_set_options", 00:14:30.967 "params": { 00:14:30.967 "action_on_timeout": "none", 00:14:30.967 "timeout_us": 0, 00:14:30.967 "timeout_admin_us": 0, 00:14:30.967 "keep_alive_timeout_ms": 10000, 00:14:30.967 "arbitration_burst": 0, 00:14:30.967 "low_priority_weight": 0, 00:14:30.967 "medium_priority_weight": 0, 00:14:30.967 "high_priority_weight": 0, 00:14:30.967 "nvme_adminq_poll_period_us": 10000, 00:14:30.967 "nvme_ioq_poll_period_us": 0, 00:14:30.967 "io_queue_requests": 512, 00:14:30.967 "delay_cmd_submit": true, 00:14:30.967 "transport_retry_count": 4, 00:14:30.967 "bdev_retry_count": 3, 00:14:30.967 "transport_ack_timeout": 0, 00:14:30.967 "ctrlr_loss_timeout_sec": 0, 00:14:30.967 "reconnect_delay_sec": 0, 00:14:30.967 "fast_io_fail_timeout_sec": 0, 00:14:30.967 "disable_auto_failback": false, 00:14:30.967 "generate_uuids": false, 00:14:30.967 "transport_tos": 0, 00:14:30.967 "nvme_error_stat": false, 00:14:30.967 "rdma_srq_size": 0, 00:14:30.967 "io_path_stat": false, 00:14:30.967 "allow_accel_sequence": false, 00:14:30.967 "rdma_max_cq_size": 0, 00:14:30.967 "rdma_cm_event_timeout_ms": 0, 00:14:30.967 "dhchap_digests": [ 00:14:30.967 "sha256", 00:14:30.967 "sha384", 00:14:30.967 "sha512" 00:14:30.967 ], 00:14:30.967 "dhchap_dhgroups": [ 00:14:30.967 "null", 00:14:30.967 "ffdhe2048", 00:14:30.967 "ffdhe3072", 00:14:30.967 "ffdhe4096", 00:14:30.967 "ffdhe6144", 00:14:30.967 "ffdhe8192" 00:14:30.967 ] 00:14:30.967 } 00:14:30.967 }, 00:14:30.967 { 00:14:30.967 "method": "bdev_nvme_attach_controller", 00:14:30.968 "params": { 00:14:30.968 "name": "nvme0", 00:14:30.968 "trtype": "TCP", 00:14:30.968 "adrfam": "IPv4", 00:14:30.968 "traddr": "10.0.0.3", 00:14:30.968 "trsvcid": "4420", 00:14:30.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.968 "prchk_reftag": false, 00:14:30.968 "prchk_guard": false, 00:14:30.968 "ctrlr_loss_timeout_sec": 0, 00:14:30.968 "reconnect_delay_sec": 0, 00:14:30.968 "fast_io_fail_timeout_sec": 0, 00:14:30.968 "psk": "key0", 00:14:30.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:30.968 "hdgst": false, 00:14:30.968 "ddgst": false, 00:14:30.968 "multipath": "multipath" 00:14:30.968 } 00:14:30.968 }, 00:14:30.968 { 00:14:30.968 "method": "bdev_nvme_set_hotplug", 00:14:30.968 "params": { 00:14:30.968 "period_us": 100000, 00:14:30.968 "enable": false 00:14:30.968 } 00:14:30.968 }, 00:14:30.968 { 00:14:30.968 "method": "bdev_enable_histogram", 00:14:30.968 "params": { 00:14:30.968 "name": "nvme0n1", 00:14:30.968 "enable": true 00:14:30.968 } 00:14:30.968 }, 00:14:30.968 { 00:14:30.968 "method": "bdev_wait_for_examine" 00:14:30.968 } 00:14:30.968 ] 00:14:30.968 }, 00:14:30.968 { 
00:14:30.968 "subsystem": "nbd", 00:14:30.968 "config": [] 00:14:30.968 } 00:14:30.968 ] 00:14:30.968 }' 00:14:30.968 [2024-11-26 19:22:29.298889] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:30.968 [2024-11-26 19:22:29.299230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72227 ] 00:14:31.226 [2024-11-26 19:22:29.442854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.226 [2024-11-26 19:22:29.506852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.226 [2024-11-26 19:22:29.639593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.484 [2024-11-26 19:22:29.686105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.053 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.053 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:32.053 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:32.053 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:32.312 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.312 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.312 Running I/O for 1 seconds... 
00:14:33.522 4510.00 IOPS, 17.62 MiB/s 00:14:33.522 Latency(us) 00:14:33.522 [2024-11-26T19:22:31.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.522 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.522 Verification LBA range: start 0x0 length 0x2000 00:14:33.522 nvme0n1 : 1.02 4564.17 17.83 0.00 0.00 27811.00 5183.30 22997.18 00:14:33.522 [2024-11-26T19:22:31.962Z] =================================================================================================================== 00:14:33.522 [2024-11-26T19:22:31.962Z] Total : 4564.17 17.83 0.00 0.00 27811.00 5183.30 22997.18 00:14:33.522 { 00:14:33.522 "results": [ 00:14:33.522 { 00:14:33.522 "job": "nvme0n1", 00:14:33.522 "core_mask": "0x2", 00:14:33.522 "workload": "verify", 00:14:33.522 "status": "finished", 00:14:33.522 "verify_range": { 00:14:33.522 "start": 0, 00:14:33.522 "length": 8192 00:14:33.522 }, 00:14:33.522 "queue_depth": 128, 00:14:33.522 "io_size": 4096, 00:14:33.522 "runtime": 1.016175, 00:14:33.522 "iops": 4564.174477821241, 00:14:33.522 "mibps": 17.828806553989224, 00:14:33.522 "io_failed": 0, 00:14:33.522 "io_timeout": 0, 00:14:33.522 "avg_latency_us": 27811.00021482614, 00:14:33.522 "min_latency_us": 5183.301818181818, 00:14:33.522 "max_latency_us": 22997.17818181818 00:14:33.522 } 00:14:33.522 ], 00:14:33.522 "core_count": 1 00:14:33.522 } 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:33.522 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:33.523 nvmf_trace.0 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72227 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72227 ']' 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72227 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72227 00:14:33.523 killing process 
with pid 72227 00:14:33.523 Received shutdown signal, test time was about 1.000000 seconds 00:14:33.523 00:14:33.523 Latency(us) 00:14:33.523 [2024-11-26T19:22:31.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.523 [2024-11-26T19:22:31.963Z] =================================================================================================================== 00:14:33.523 [2024-11-26T19:22:31.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72227' 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72227 00:14:33.523 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72227 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.782 rmmod nvme_tcp 00:14:33.782 rmmod nvme_fabrics 00:14:33.782 rmmod nvme_keyring 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72192 ']' 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72192 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72192 ']' 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72192 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72192 00:14:33.782 killing process with pid 72192 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72192' 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72192 00:14:33.782 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72192 
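With the run finished, cleanup kills bdevperf (pid 72227), then the target (pid 72192), and unloads the kernel NVMe modules before the veth/namespace teardown that follows. A hand-run equivalent of the same steps (the PID variables are placeholders for this run's 72227/72192 and assume the processes are children of the current shell; module names match the rmmod messages above):

# Hedged sketch of the same teardown done by hand.
kill "$bdevperf_pid" && wait "$bdevperf_pid"
kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"
# drop the host-side kernel modules the test loaded
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics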
00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:34.041 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:34.042 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.042 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:34.042 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3ZpbYPyFn6 /tmp/tmp.nffhvXC6cN /tmp/tmp.aPdljVs0ge 00:14:34.300 00:14:34.300 real 1m23.203s 00:14:34.300 user 2m14.164s 00:14:34.300 sys 0m26.661s 00:14:34.300 ************************************ 00:14:34.300 END TEST nvmf_tls 00:14:34.300 ************************************ 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.300 ************************************ 00:14:34.300 START TEST nvmf_fips 00:14:34.300 ************************************ 00:14:34.300 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:34.560 * Looking for test storage... 00:14:34.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.560 --rc genhtml_branch_coverage=1 00:14:34.560 --rc genhtml_function_coverage=1 00:14:34.560 --rc genhtml_legend=1 00:14:34.560 --rc geninfo_all_blocks=1 00:14:34.560 --rc geninfo_unexecuted_blocks=1 00:14:34.560 00:14:34.560 ' 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.560 --rc genhtml_branch_coverage=1 00:14:34.560 --rc genhtml_function_coverage=1 00:14:34.560 --rc genhtml_legend=1 00:14:34.560 --rc geninfo_all_blocks=1 00:14:34.560 --rc geninfo_unexecuted_blocks=1 00:14:34.560 00:14:34.560 ' 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.560 --rc genhtml_branch_coverage=1 00:14:34.560 --rc genhtml_function_coverage=1 00:14:34.560 --rc genhtml_legend=1 00:14:34.560 --rc geninfo_all_blocks=1 00:14:34.560 --rc geninfo_unexecuted_blocks=1 00:14:34.560 00:14:34.560 ' 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.560 --rc genhtml_branch_coverage=1 00:14:34.560 --rc genhtml_function_coverage=1 00:14:34.560 --rc genhtml_legend=1 00:14:34.560 --rc geninfo_all_blocks=1 00:14:34.560 --rc geninfo_unexecuted_blocks=1 00:14:34.560 00:14:34.560 ' 00:14:34.560 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
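The fips.sh run starting here first sources nvmf/common.sh and then, a little further down, gates on the system OpenSSL version being at least 3.0.0 via the cmp_versions helpers. A simplified stand-in for that gate using sort -V, an approximation rather than the real scripts/common.sh implementation:

# Approximate version gate: require OpenSSL >= 3.0.0 for the FIPS tests.
ver=$(openssl version | awk '{print $2}')
if printf '3.0.0\n%s\n' "$ver" | sort -V -C; then
    echo "openssl $ver meets the 3.0.0 minimum"
else
    echo "openssl $ver is too old for fips.sh" >&2
fi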
00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.561 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:34.561 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:34.821 Error setting digest 00:14:34.821 40C242DF607F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:34.821 40C242DF607F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:34.821 
19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:34.821 Cannot find device "nvmf_init_br" 00:14:34.821 19:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:34.821 Cannot find device "nvmf_init_br2" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:34.821 Cannot find device "nvmf_tgt_br" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.821 Cannot find device "nvmf_tgt_br2" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:34.821 Cannot find device "nvmf_init_br" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:34.821 Cannot find device "nvmf_init_br2" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:34.821 Cannot find device "nvmf_tgt_br" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:34.821 Cannot find device "nvmf_tgt_br2" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:34.821 Cannot find device "nvmf_br" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:34.821 Cannot find device "nvmf_init_if" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:34.821 Cannot find device "nvmf_init_if2" 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:34.821 19:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:34.821 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:35.080 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:35.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:35.081 00:14:35.081 --- 10.0.0.3 ping statistics --- 00:14:35.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.081 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:35.081 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:35.081 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:14:35.081 00:14:35.081 --- 10.0.0.4 ping statistics --- 00:14:35.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.081 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:35.081 00:14:35.081 --- 10.0.0.1 ping statistics --- 00:14:35.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.081 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:35.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:35.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:14:35.081 00:14:35.081 --- 10.0.0.2 ping statistics --- 00:14:35.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.081 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72546 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72546 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72546 ']' 00:14:35.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.081 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:35.340 [2024-11-26 19:22:33.585447] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
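The nvmf_veth_init sequence above builds the virtual test network the target and initiator talk over: host-side initiator veth ends (10.0.0.1/10.0.0.2), target veth ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), everything attached to one bridge, plus iptables ACCEPT rules for TCP port 4420, verified with the pings shown. A condensed sketch of that topology with a single initiator/target pair instead of the two the trace creates, error handling omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # host-side initiator end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target end, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the target port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged traffic
  ping -c 1 10.0.0.3                                                  # host -> namespaced target across the bridge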
00:14:35.340 [2024-11-26 19:22:33.585738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.340 [2024-11-26 19:22:33.740577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.599 [2024-11-26 19:22:33.792229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.599 [2024-11-26 19:22:33.792284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.599 [2024-11-26 19:22:33.792299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.599 [2024-11-26 19:22:33.792311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.599 [2024-11-26 19:22:33.792328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.599 [2024-11-26 19:22:33.792721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.599 [2024-11-26 19:22:33.843824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.599 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.599 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:35.599 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.b4x 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.b4x 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.b4x 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.b4x 00:14:35.600 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.859 [2024-11-26 19:22:34.175526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.859 [2024-11-26 19:22:34.191492] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:35.859 [2024-11-26 19:22:34.191662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:35.859 malloc0 00:14:35.859 19:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72574 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72574 /var/tmp/bdevperf.sock 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72574 ']' 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.859 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:36.118 [2024-11-26 19:22:34.326703] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:36.118 [2024-11-26 19:22:34.326969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72574 ] 00:14:36.118 [2024-11-26 19:22:34.468605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.118 [2024-11-26 19:22:34.515160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.377 [2024-11-26 19:22:34.568253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.377 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.377 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:36.377 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.b4x 00:14:36.636 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:36.895 [2024-11-26 19:22:35.120468] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:36.895 TLSTESTn1 00:14:36.895 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:36.895 Running I/O for 10 seconds... 
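The FIPS test's data path is wired up entirely over JSON-RPC: the interchange-format PSK written to /tmp/spdk-psk.b4x is registered as a keyring file key in the bdevperf application, a TLS-enabled NVMe/TCP controller is attached with that key, and bdevperf.py then drives the verify workload whose progress follows. Condensed from the trace (PSK value elided, absolute repo paths shortened), assuming bdevperf was already started with -z -r /var/tmp/bdevperf.sock as above; a sketch of the sequence rather than a standalone script:

  KEY_PATH=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:...' > "$KEY_PATH"          # TLS PSK in NVMe interchange format; full value elided here
  chmod 0600 "$KEY_PATH"
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests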
00:14:39.211 4771.00 IOPS, 18.64 MiB/s [2024-11-26T19:22:38.588Z] 4837.50 IOPS, 18.90 MiB/s [2024-11-26T19:22:39.525Z] 4869.00 IOPS, 19.02 MiB/s [2024-11-26T19:22:40.462Z] 4890.25 IOPS, 19.10 MiB/s [2024-11-26T19:22:41.400Z] 4889.40 IOPS, 19.10 MiB/s [2024-11-26T19:22:42.339Z] 4898.67 IOPS, 19.14 MiB/s [2024-11-26T19:22:43.719Z] 4904.00 IOPS, 19.16 MiB/s [2024-11-26T19:22:44.654Z] 4903.62 IOPS, 19.15 MiB/s [2024-11-26T19:22:45.591Z] 4910.78 IOPS, 19.18 MiB/s [2024-11-26T19:22:45.591Z] 4916.50 IOPS, 19.21 MiB/s 00:14:47.151 Latency(us) 00:14:47.151 [2024-11-26T19:22:45.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.151 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:47.151 Verification LBA range: start 0x0 length 0x2000 00:14:47.151 TLSTESTn1 : 10.01 4922.16 19.23 0.00 0.00 25959.43 5362.04 21567.30 00:14:47.151 [2024-11-26T19:22:45.591Z] =================================================================================================================== 00:14:47.151 [2024-11-26T19:22:45.591Z] Total : 4922.16 19.23 0.00 0.00 25959.43 5362.04 21567.30 00:14:47.151 { 00:14:47.151 "results": [ 00:14:47.151 { 00:14:47.151 "job": "TLSTESTn1", 00:14:47.151 "core_mask": "0x4", 00:14:47.151 "workload": "verify", 00:14:47.151 "status": "finished", 00:14:47.151 "verify_range": { 00:14:47.151 "start": 0, 00:14:47.151 "length": 8192 00:14:47.151 }, 00:14:47.151 "queue_depth": 128, 00:14:47.151 "io_size": 4096, 00:14:47.151 "runtime": 10.01409, 00:14:47.151 "iops": 4922.1646699799985, 00:14:47.151 "mibps": 19.22720574210937, 00:14:47.151 "io_failed": 0, 00:14:47.151 "io_timeout": 0, 00:14:47.151 "avg_latency_us": 25959.431042583838, 00:14:47.151 "min_latency_us": 5362.036363636364, 00:14:47.151 "max_latency_us": 21567.30181818182 00:14:47.151 } 00:14:47.151 ], 00:14:47.151 "core_count": 1 00:14:47.151 } 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:47.151 nvmf_trace.0 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72574 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72574 ']' 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72574 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72574 00:14:47.151 killing process with pid 72574 00:14:47.151 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.151 00:14:47.151 Latency(us) 00:14:47.151 [2024-11-26T19:22:45.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.151 [2024-11-26T19:22:45.591Z] =================================================================================================================== 00:14:47.151 [2024-11-26T19:22:45.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72574' 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72574 00:14:47.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72574 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.411 rmmod nvme_tcp 00:14:47.411 rmmod nvme_fabrics 00:14:47.411 rmmod nvme_keyring 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72546 ']' 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72546 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72546 ']' 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72546 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72546 00:14:47.411 killing process with pid 72546 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72546' 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72546 00:14:47.411 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72546 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:47.671 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:47.671 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:47.930 19:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.b4x 00:14:47.930 00:14:47.930 real 0m13.507s 00:14:47.930 user 0m18.280s 00:14:47.930 sys 0m5.616s 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:47.930 ************************************ 00:14:47.930 END TEST nvmf_fips 00:14:47.930 ************************************ 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:47.930 ************************************ 00:14:47.930 START TEST nvmf_control_msg_list 00:14:47.930 ************************************ 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:47.930 * Looking for test storage... 00:14:47.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:14:47.930 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:48.190 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:48.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.191 --rc genhtml_branch_coverage=1 00:14:48.191 --rc genhtml_function_coverage=1 00:14:48.191 --rc genhtml_legend=1 00:14:48.191 --rc geninfo_all_blocks=1 00:14:48.191 --rc geninfo_unexecuted_blocks=1 00:14:48.191 00:14:48.191 ' 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:48.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.191 --rc genhtml_branch_coverage=1 00:14:48.191 --rc genhtml_function_coverage=1 00:14:48.191 --rc genhtml_legend=1 00:14:48.191 --rc geninfo_all_blocks=1 00:14:48.191 --rc geninfo_unexecuted_blocks=1 00:14:48.191 00:14:48.191 ' 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:48.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.191 --rc genhtml_branch_coverage=1 00:14:48.191 --rc genhtml_function_coverage=1 00:14:48.191 --rc genhtml_legend=1 00:14:48.191 --rc geninfo_all_blocks=1 00:14:48.191 --rc geninfo_unexecuted_blocks=1 00:14:48.191 00:14:48.191 ' 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:48.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.191 --rc genhtml_branch_coverage=1 00:14:48.191 --rc genhtml_function_coverage=1 00:14:48.191 --rc genhtml_legend=1 00:14:48.191 --rc geninfo_all_blocks=1 00:14:48.191 --rc 
geninfo_unexecuted_blocks=1 00:14:48.191 00:14:48.191 ' 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.191 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.192 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:48.192 Cannot find device "nvmf_init_br" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:48.192 Cannot find device "nvmf_init_br2" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:48.192 Cannot find device "nvmf_tgt_br" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.192 Cannot find device "nvmf_tgt_br2" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:48.192 Cannot find device "nvmf_init_br" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:48.192 Cannot find device "nvmf_init_br2" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:48.192 Cannot find device "nvmf_tgt_br" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:48.192 Cannot find device "nvmf_tgt_br2" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:48.192 Cannot find device "nvmf_br" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:48.192 Cannot find 
device "nvmf_init_if" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:48.192 Cannot find device "nvmf_init_if2" 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.192 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.450 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.450 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:48.451 19:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:48.451 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:48.451 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:48.451 00:14:48.451 --- 10.0.0.3 ping statistics --- 00:14:48.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.451 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:48.451 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:48.451 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:14:48.451 00:14:48.451 --- 10.0.0.4 ping statistics --- 00:14:48.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.451 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:48.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:48.451 00:14:48.451 --- 10.0.0.1 ping statistics --- 00:14:48.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.451 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:48.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:48.451 00:14:48.451 --- 10.0.0.2 ping statistics --- 00:14:48.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.451 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72961 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72961 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72961 ']' 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
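The nvmf_veth_init trace above builds the whole test network: a target namespace (nvmf_tgt_ns_spdk), veth pairs for initiator and target sides, a bridge (nvmf_br), addresses 10.0.0.1-10.0.0.4, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A minimal standalone sketch of the same topology, using only commands that appear in the trace and the same interface/namespace names (run as root; the second initiator/target pair nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4 is set up the same way and omitted here, as is teardown):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                  # initiator side reaches the target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # and back from inside the namespace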
00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.451 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.711 [2024-11-26 19:22:46.932637] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:48.711 [2024-11-26 19:22:46.932720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.711 [2024-11-26 19:22:47.077560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.711 [2024-11-26 19:22:47.128637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.711 [2024-11-26 19:22:47.128697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.711 [2024-11-26 19:22:47.128720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.711 [2024-11-26 19:22:47.128730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.711 [2024-11-26 19:22:47.128740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.711 [2024-11-26 19:22:47.129223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.969 [2024-11-26 19:22:47.185890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 [2024-11-26 19:22:47.290292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 Malloc0 00:14:48.969 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.970 [2024-11-26 19:22:47.329299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72980 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72981 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72982 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72980 00:14:48.970 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:49.228 [2024-11-26 19:22:47.507588] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:49.228 [2024-11-26 19:22:47.527845] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:49.228 [2024-11-26 19:22:47.528363] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:50.165 Initializing NVMe Controllers 00:14:50.165 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:50.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:50.165 Initialization complete. Launching workers. 00:14:50.165 ======================================================== 00:14:50.165 Latency(us) 00:14:50.165 Device Information : IOPS MiB/s Average min max 00:14:50.165 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3876.97 15.14 257.62 114.37 642.41 00:14:50.165 ======================================================== 00:14:50.165 Total : 3876.97 15.14 257.62 114.37 642.41 00:14:50.165 00:14:50.165 Initializing NVMe Controllers 00:14:50.165 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:50.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:50.165 Initialization complete. Launching workers. 00:14:50.165 ======================================================== 00:14:50.165 Latency(us) 00:14:50.165 Device Information : IOPS MiB/s Average min max 00:14:50.165 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3850.00 15.04 259.39 134.19 570.21 00:14:50.165 ======================================================== 00:14:50.165 Total : 3850.00 15.04 259.39 134.19 570.21 00:14:50.165 00:14:50.165 Initializing NVMe Controllers 00:14:50.165 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:50.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:50.165 Initialization complete. Launching workers. 
00:14:50.165 ======================================================== 00:14:50.165 Latency(us) 00:14:50.165 Device Information : IOPS MiB/s Average min max 00:14:50.165 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3864.00 15.09 258.47 157.30 436.72 00:14:50.165 ======================================================== 00:14:50.165 Total : 3864.00 15.09 258.47 157.30 436.72 00:14:50.165 00:14:50.165 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72981 00:14:50.165 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72982 00:14:50.165 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:50.165 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:50.165 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.165 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.424 rmmod nvme_tcp 00:14:50.424 rmmod nvme_fabrics 00:14:50.424 rmmod nvme_keyring 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72961 ']' 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72961 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72961 ']' 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72961 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72961 00:14:50.424 killing process with pid 72961 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72961' 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72961 00:14:50.424 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72961 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:50.684 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:50.684 00:14:50.684 real 0m2.858s 00:14:50.684 user 0m4.746s 00:14:50.684 
sys 0m1.263s 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.684 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.684 ************************************ 00:14:50.684 END TEST nvmf_control_msg_list 00:14:50.684 ************************************ 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.944 ************************************ 00:14:50.944 START TEST nvmf_wait_for_buf 00:14:50.944 ************************************ 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:50.944 * Looking for test storage... 00:14:50.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.944 --rc genhtml_branch_coverage=1 00:14:50.944 --rc genhtml_function_coverage=1 00:14:50.944 --rc genhtml_legend=1 00:14:50.944 --rc geninfo_all_blocks=1 00:14:50.944 --rc geninfo_unexecuted_blocks=1 00:14:50.944 00:14:50.944 ' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.944 --rc genhtml_branch_coverage=1 00:14:50.944 --rc genhtml_function_coverage=1 00:14:50.944 --rc genhtml_legend=1 00:14:50.944 --rc geninfo_all_blocks=1 00:14:50.944 --rc geninfo_unexecuted_blocks=1 00:14:50.944 00:14:50.944 ' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.944 --rc genhtml_branch_coverage=1 00:14:50.944 --rc genhtml_function_coverage=1 00:14:50.944 --rc genhtml_legend=1 00:14:50.944 --rc geninfo_all_blocks=1 00:14:50.944 --rc geninfo_unexecuted_blocks=1 00:14:50.944 00:14:50.944 ' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.944 --rc genhtml_branch_coverage=1 00:14:50.944 --rc genhtml_function_coverage=1 00:14:50.944 --rc genhtml_legend=1 00:14:50.944 --rc geninfo_all_blocks=1 00:14:50.944 --rc geninfo_unexecuted_blocks=1 00:14:50.944 00:14:50.944 ' 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.944 19:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.944 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.945 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.945 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:51.204 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:51.205 Cannot find device "nvmf_init_br" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:51.205 Cannot find device "nvmf_init_br2" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:51.205 Cannot find device "nvmf_tgt_br" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.205 Cannot find device "nvmf_tgt_br2" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:51.205 Cannot find device "nvmf_init_br" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:51.205 Cannot find device "nvmf_init_br2" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:51.205 Cannot find device "nvmf_tgt_br" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:51.205 Cannot find device "nvmf_tgt_br2" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:51.205 Cannot find device "nvmf_br" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:51.205 Cannot find device "nvmf_init_if" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:51.205 Cannot find device "nvmf_init_if2" 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.205 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.205 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:51.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:14:51.464 00:14:51.464 --- 10.0.0.3 ping statistics --- 00:14:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.464 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:51.464 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:51.464 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:14:51.464 00:14:51.464 --- 10.0.0.4 ping statistics --- 00:14:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.464 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:51.464 00:14:51.464 --- 10.0.0.1 ping statistics --- 00:14:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.464 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:51.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:51.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:51.464 00:14:51.464 --- 10.0.0.2 ping statistics --- 00:14:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.464 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73219 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73219 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73219 ']' 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.464 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.464 [2024-11-26 19:22:49.840494] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:14:51.464 [2024-11-26 19:22:49.840575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.723 [2024-11-26 19:22:49.995149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.723 [2024-11-26 19:22:50.049697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.723 [2024-11-26 19:22:50.049753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.723 [2024-11-26 19:22:50.049768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.723 [2024-11-26 19:22:50.049778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.723 [2024-11-26 19:22:50.049787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.723 [2024-11-26 19:22:50.050231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:51.723 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.723 19:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.982 [2024-11-26 19:22:50.190206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.982 Malloc0 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.982 [2024-11-26 19:22:50.257043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.982 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.983 [2024-11-26 19:22:50.285142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.983 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:52.242 [2024-11-26 19:22:50.481045] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:53.619 Initializing NVMe Controllers 00:14:53.619 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:53.619 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:53.619 Initialization complete. Launching workers. 00:14:53.619 ======================================================== 00:14:53.619 Latency(us) 00:14:53.619 Device Information : IOPS MiB/s Average min max 00:14:53.619 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7985.63 6849.07 8981.65 00:14:53.619 ======================================================== 00:14:53.619 Total : 504.00 63.00 7985.63 6849.07 8981.65 00:14:53.619 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:53.619 rmmod nvme_tcp 00:14:53.619 rmmod nvme_fabrics 00:14:53.619 rmmod nvme_keyring 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73219 ']' 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73219 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73219 ']' 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73219 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73219 00:14:53.619 killing process with pid 73219 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73219' 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73219 00:14:53.619 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73219 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:53.878 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:53.879 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:54.138 00:14:54.138 real 0m3.211s 00:14:54.138 user 0m2.586s 00:14:54.138 sys 0m0.779s 00:14:54.138 ************************************ 00:14:54.138 END TEST nvmf_wait_for_buf 00:14:54.138 ************************************ 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.138 ************************************ 00:14:54.138 START TEST nvmf_nsid 00:14:54.138 ************************************ 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:54.138 * Looking for test storage... 
00:14:54.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:54.138 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.398 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:54.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.399 --rc genhtml_branch_coverage=1 00:14:54.399 --rc genhtml_function_coverage=1 00:14:54.399 --rc genhtml_legend=1 00:14:54.399 --rc geninfo_all_blocks=1 00:14:54.399 --rc geninfo_unexecuted_blocks=1 00:14:54.399 00:14:54.399 ' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:54.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.399 --rc genhtml_branch_coverage=1 00:14:54.399 --rc genhtml_function_coverage=1 00:14:54.399 --rc genhtml_legend=1 00:14:54.399 --rc geninfo_all_blocks=1 00:14:54.399 --rc geninfo_unexecuted_blocks=1 00:14:54.399 00:14:54.399 ' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:54.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.399 --rc genhtml_branch_coverage=1 00:14:54.399 --rc genhtml_function_coverage=1 00:14:54.399 --rc genhtml_legend=1 00:14:54.399 --rc geninfo_all_blocks=1 00:14:54.399 --rc geninfo_unexecuted_blocks=1 00:14:54.399 00:14:54.399 ' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:54.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.399 --rc genhtml_branch_coverage=1 00:14:54.399 --rc genhtml_function_coverage=1 00:14:54.399 --rc genhtml_legend=1 00:14:54.399 --rc geninfo_all_blocks=1 00:14:54.399 --rc geninfo_unexecuted_blocks=1 00:14:54.399 00:14:54.399 ' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
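The trace above (scripts/common.sh@333-368) walks the lcov version check field by field before the nsid test sources nvmf/common.sh. As a condensed illustration of what that traced "lt 1.15 2" call computes, here is a minimal standalone sketch of the same dotted-version comparison; the function names lt and cmp_versions follow the traced script, but the self-contained form below is an assumption, not the test suite's own code.

# cmp_versions VER1 OP VER2 -- compare two dotted versions field by field.
cmp_versions() {
    local IFS=.-:                     # split version fields on '.', '-' and ':', as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v f1 f2
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        f1=${ver1[v]:-0} f2=${ver2[v]:-0}          # missing fields compare as 0, so "2" acts like "2.0"
        if   (( f1 > f2 )); then [[ $op == ">"* ]]; return
        elif (( f1 < f2 )); then [[ $op == "<"* ]]; return
        fi
    done
    [[ $op == *"="* ]]                # identical versions satisfy only operators that allow equality
}

lt() { cmp_versions "$1" "<" "$2"; }  # helper invoked as 'lt 1.15 2' in the trace above

lt 1.15 2 && echo "lcov 1.15 is older than 2"

In this run the check succeeds, which is why the older lcov_branch_coverage/lcov_function_coverage option spelling is exported in the LCOV_OPTS lines that follow.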
00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.399 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.399 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:54.400 Cannot find device "nvmf_init_br" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:54.400 Cannot find device "nvmf_init_br2" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:54.400 Cannot find device "nvmf_tgt_br" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.400 Cannot find device "nvmf_tgt_br2" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:54.400 Cannot find device "nvmf_init_br" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:54.400 Cannot find device "nvmf_init_br2" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:54.400 Cannot find device "nvmf_tgt_br" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:54.400 Cannot find device "nvmf_tgt_br2" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:54.400 Cannot find device "nvmf_br" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:54.400 Cannot find device "nvmf_init_if" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:54.400 Cannot find device "nvmf_init_if2" 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:54.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.400 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.658 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.658 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.658 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:54.658 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
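The traced block above (nvmf/common.sh@177-214) builds the virtual test topology for the nsid run: a network namespace for the target, four veth pairs, 10.0.0.1-10.0.0.4 addressing, and a bridge joining the host-side peers. The sketch below condenses those steps into one place as a reference; interface and namespace names are taken verbatim from the trace, but treat it as an illustrative recreation (it needs root and iproute2), not the suite's own helper.

# Recreation of the veth/bridge topology traced above (run as root).
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Four veth pairs: two initiator-side, two target-side.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: the initiator keeps 10.0.0.1/2, the target namespace gets 10.0.0.3/4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the four host-side peers so initiator and target sides can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

With the bridge in place, the iptables ACCEPT rules and the four 10.0.0.x pings that follow in the log verify that port 4420 traffic can cross between the host and the target namespace.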
00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:54.659 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:54.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:54.659 00:14:54.659 --- 10.0.0.3 ping statistics --- 00:14:54.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.659 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:54.659 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:54.659 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:14:54.659 00:14:54.659 --- 10.0.0.4 ping statistics --- 00:14:54.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.659 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:54.659 00:14:54.659 --- 10.0.0.1 ping statistics --- 00:14:54.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.659 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:54.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:54.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:14:54.659 00:14:54.659 --- 10.0.0.2 ping statistics --- 00:14:54.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.659 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73477 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73477 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73477 ']' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.659 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:54.918 [2024-11-26 19:22:53.117137] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:14:54.918 [2024-11-26 19:22:53.117224] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.918 [2024-11-26 19:22:53.264639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.918 [2024-11-26 19:22:53.306356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.918 [2024-11-26 19:22:53.306422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.918 [2024-11-26 19:22:53.306448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.918 [2024-11-26 19:22:53.306456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.918 [2024-11-26 19:22:53.306463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.918 [2024-11-26 19:22:53.306848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.178 [2024-11-26 19:22:53.362186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73500 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cc99b637-05b9-46a7-aeec-6a9ab9d10208 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a0854f59-4bfb-4412-8645-1f40f505c209 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c045f772-124d-4f8f-82f1-24fc670ee6c4 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:55.178 null0 00:14:55.178 null1 00:14:55.178 null2 00:14:55.178 [2024-11-26 19:22:53.519382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.178 [2024-11-26 19:22:53.534274] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:55.178 [2024-11-26 19:22:53.534379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73500 ] 00:14:55.178 [2024-11-26 19:22:53.543482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73500 /var/tmp/tgt2.sock 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73500 ']' 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
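The get_main_ns_ip trace just above (nvmf/common.sh@769-783) picks the address that the second target's initiator-side commands should use, based on the transport type. A hedged sketch of that selection logic follows; the addresses and candidate mapping are exactly what the trace shows for this run, while the variable name TEST_TRANSPORT and the standalone form are assumptions made for illustration.

# Sketch of the address-selection logic traced above: for tcp tests the
# "main namespace" IP is the first initiator address, for rdma it would be
# the first target address.
NVMF_FIRST_TARGET_IP=10.0.0.3
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp            # assumed name for the transport variable ('tcp' in this run)

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}      # name of the variable holding the address
    [[ -z ${!ip} ]] && return 1               # indirect expansion resolves that variable
    echo "${!ip}"
}

tgt2addr=$(get_main_ns_ip)    # -> 10.0.0.1 in this run

That 10.0.0.1 address is the one the later 'nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2' call in the log uses to reach the second target listening on port 4421.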
00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.178 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:55.464 [2024-11-26 19:22:53.681515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.464 [2024-11-26 19:22:53.737730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.464 [2024-11-26 19:22:53.811887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:55.736 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.736 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:55.736 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:56.006 [2024-11-26 19:22:54.424933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.006 [2024-11-26 19:22:54.441113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:56.265 nvme0n1 nvme0n2 00:14:56.265 nvme1n1 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:56.265 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:57.642 19:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cc99b637-05b9-46a7-aeec-6a9ab9d10208 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cc99b63705b946a7aeec6a9ab9d10208 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CC99B63705B946A7AEEC6A9AB9D10208 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CC99B63705B946A7AEEC6A9AB9D10208 == \C\C\9\9\B\6\3\7\0\5\B\9\4\6\A\7\A\E\E\C\6\A\9\A\B\9\D\1\0\2\0\8 ]] 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a0854f59-4bfb-4412-8645-1f40f505c209 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a0854f594bfb441286451f40f505c209 00:14:57.642 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A0854F594BFB441286451F40F505C209 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A0854F594BFB441286451F40F505C209 == \A\0\8\5\4\F\5\9\4\B\F\B\4\4\1\2\8\6\4\5\1\F\4\0\F\5\0\5\C\2\0\9 ]] 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:57.643 19:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c045f772-124d-4f8f-82f1-24fc670ee6c4 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c045f772124d4f8f82f124fc670ee6c4 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C045F772124D4F8F82F124FC670EE6C4 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C045F772124D4F8F82F124FC670EE6C4 == \C\0\4\5\F\7\7\2\1\2\4\D\4\F\8\F\8\2\F\1\2\4\F\C\6\7\0\E\E\6\C\4 ]] 00:14:57.643 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73500 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73500 ']' 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73500 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73500 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:57.643 killing process with pid 73500 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73500' 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73500 00:14:57.643 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73500 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.212 rmmod nvme_tcp 00:14:58.212 rmmod nvme_fabrics 00:14:58.212 rmmod nvme_keyring 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73477 ']' 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73477 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73477 ']' 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73477 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73477 00:14:58.212 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.213 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.213 killing process with pid 73477 00:14:58.213 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73477' 00:14:58.213 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73477 00:14:58.213 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73477 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:58.472 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:58.731 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:58.731 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.731 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.731 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:58.731 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.731 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.731 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.731 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:58.731 00:14:58.731 real 0m4.590s 00:14:58.731 user 0m6.881s 00:14:58.731 sys 0m1.571s 00:14:58.731 ************************************ 00:14:58.731 END TEST nvmf_nsid 00:14:58.731 ************************************ 00:14:58.731 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.731 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:58.731 19:22:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:58.731 00:14:58.731 real 4m59.405s 00:14:58.731 user 10m26.174s 00:14:58.731 sys 1m7.066s 00:14:58.731 19:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.731 19:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.731 ************************************ 00:14:58.731 END TEST nvmf_target_extra 00:14:58.731 ************************************ 00:14:58.731 19:22:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:58.731 19:22:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.731 19:22:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.731 19:22:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.731 ************************************ 00:14:58.731 START TEST nvmf_host 00:14:58.731 ************************************ 00:14:58.731 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:58.990 * Looking for test storage... 
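Before the nvmf_host suite gets going, it is worth pulling the actual assertion out of the dense nsid trace above: for every attached namespace, the NGUID reported by the kernel must equal the namespace UUID with its dashes stripped. A condensed sketch of that check for the first namespace, using the UUID and device name from this run (the suite's uuid2nguid and nvme_get_nguid helpers do the equivalent):

    # Expected NGUID: the UUID handed to the target, upper-cased and dash-stripped.
    uuid=cc99b637-05b9-46a7-aeec-6a9ab9d10208
    expected=$(tr -d - <<< "${uuid^^}")
    # Reported NGUID: read the namespace identify data back from the kernel as JSON.
    reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ "${reported^^}" == "$expected" ]] && echo "nsid 1: NGUID matches its UUID"

The trace repeats the same comparison for nvme0n2 and nvme0n3, then disconnects the controller and runs the usual nvmftestfini teardown (module unload, SPDK_NVMF iptables rules stripped, veth/bridge/namespace deleted) that closes out the nvmf_target_extra suite.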
00:14:58.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:58.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.990 --rc genhtml_branch_coverage=1 00:14:58.990 --rc genhtml_function_coverage=1 00:14:58.990 --rc genhtml_legend=1 00:14:58.990 --rc geninfo_all_blocks=1 00:14:58.990 --rc geninfo_unexecuted_blocks=1 00:14:58.990 00:14:58.990 ' 00:14:58.990 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:58.990 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:58.990 --rc genhtml_branch_coverage=1 00:14:58.990 --rc genhtml_function_coverage=1 00:14:58.990 --rc genhtml_legend=1 00:14:58.990 --rc geninfo_all_blocks=1 00:14:58.990 --rc geninfo_unexecuted_blocks=1 00:14:58.990 00:14:58.990 ' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:58.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.991 --rc genhtml_branch_coverage=1 00:14:58.991 --rc genhtml_function_coverage=1 00:14:58.991 --rc genhtml_legend=1 00:14:58.991 --rc geninfo_all_blocks=1 00:14:58.991 --rc geninfo_unexecuted_blocks=1 00:14:58.991 00:14:58.991 ' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:58.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.991 --rc genhtml_branch_coverage=1 00:14:58.991 --rc genhtml_function_coverage=1 00:14:58.991 --rc genhtml_legend=1 00:14:58.991 --rc geninfo_all_blocks=1 00:14:58.991 --rc geninfo_unexecuted_blocks=1 00:14:58.991 00:14:58.991 ' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.991 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:58.991 
19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.991 ************************************ 00:14:58.991 START TEST nvmf_identify 00:14:58.991 ************************************ 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:58.991 * Looking for test storage... 00:14:58.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.991 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.251 --rc genhtml_branch_coverage=1 00:14:59.251 --rc genhtml_function_coverage=1 00:14:59.251 --rc genhtml_legend=1 00:14:59.251 --rc geninfo_all_blocks=1 00:14:59.251 --rc geninfo_unexecuted_blocks=1 00:14:59.251 00:14:59.251 ' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.251 --rc genhtml_branch_coverage=1 00:14:59.251 --rc genhtml_function_coverage=1 00:14:59.251 --rc genhtml_legend=1 00:14:59.251 --rc geninfo_all_blocks=1 00:14:59.251 --rc geninfo_unexecuted_blocks=1 00:14:59.251 00:14:59.251 ' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.251 --rc genhtml_branch_coverage=1 00:14:59.251 --rc genhtml_function_coverage=1 00:14:59.251 --rc genhtml_legend=1 00:14:59.251 --rc geninfo_all_blocks=1 00:14:59.251 --rc geninfo_unexecuted_blocks=1 00:14:59.251 00:14:59.251 ' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.251 --rc genhtml_branch_coverage=1 00:14:59.251 --rc genhtml_function_coverage=1 00:14:59.251 --rc genhtml_legend=1 00:14:59.251 --rc geninfo_all_blocks=1 00:14:59.251 --rc geninfo_unexecuted_blocks=1 00:14:59.251 00:14:59.251 ' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.251 
19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:59.251 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.251 19:22:57 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:59.251 Cannot find device "nvmf_init_br" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:59.251 Cannot find device "nvmf_init_br2" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:59.251 Cannot find device "nvmf_tgt_br" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:59.251 Cannot find device "nvmf_tgt_br2" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:59.251 Cannot find device "nvmf_init_br" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:59.251 Cannot find device "nvmf_init_br2" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:59.251 Cannot find device "nvmf_tgt_br" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:59.251 Cannot find device "nvmf_tgt_br2" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:59.251 Cannot find device "nvmf_br" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:59.251 Cannot find device "nvmf_init_if" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:59.251 Cannot find device "nvmf_init_if2" 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.251 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.511 
19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:59.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:59.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:59.511 00:14:59.511 --- 10.0.0.3 ping statistics --- 00:14:59.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.511 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:59.511 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:59.511 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:14:59.511 00:14:59.511 --- 10.0.0.4 ping statistics --- 00:14:59.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.511 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:14:59.511 00:14:59.511 --- 10.0.0.1 ping statistics --- 00:14:59.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.511 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:59.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:59.511 00:14:59.511 --- 10.0.0.2 ping statistics --- 00:14:59.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.511 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73853 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73853 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73853 ']' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 19:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.771 [2024-11-26 19:22:57.983443] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:14:59.771 [2024-11-26 19:22:57.983526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.771 [2024-11-26 19:22:58.127824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.771 [2024-11-26 19:22:58.176475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.771 [2024-11-26 19:22:58.176703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.771 [2024-11-26 19:22:58.176802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.771 [2024-11-26 19:22:58.176891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.771 [2024-11-26 19:22:58.176967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
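The identify test talks to its target over a purely virtual topology rather than real NICs: the host side keeps 10.0.0.1 and 10.0.0.2 on veth endpoints, the target side (10.0.0.3 and 10.0.0.4) lives inside the nvmf_tgt_ns_spdk network namespace, and all peer interfaces are enslaved to the nvmf_br bridge, which is exactly what the four pings above confirm. A condensed sketch of that layout and of the target launch inside the namespace (names, addresses and flags are the ones from this run; the second veth pair and error handling are omitted):

    # One initiator-side veth pair and one target-side veth pair, joined by a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # The target runs inside the namespace: -i 0 is the shared-memory id, -e 0xFFFF
    # enables all tracepoint groups, -m 0xF spreads reactors over cores 0-3.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The ACCEPT rules inserted just before the pings carry an SPDK_NVMF comment, which is what lets the later teardown strip only the suite's own rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.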
00:14:59.771 [2024-11-26 19:22:58.178135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.771 [2024-11-26 19:22:58.178260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.771 [2024-11-26 19:22:58.178397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.771 [2024-11-26 19:22:58.178397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.030 [2024-11-26 19:22:58.232314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 [2024-11-26 19:22:58.302487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 Malloc0 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 [2024-11-26 19:22:58.413228] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.030 [ 00:15:00.030 { 00:15:00.030 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:00.030 "subtype": "Discovery", 00:15:00.030 "listen_addresses": [ 00:15:00.030 { 00:15:00.030 "trtype": "TCP", 00:15:00.030 "adrfam": "IPv4", 00:15:00.030 "traddr": "10.0.0.3", 00:15:00.030 "trsvcid": "4420" 00:15:00.030 } 00:15:00.030 ], 00:15:00.030 "allow_any_host": true, 00:15:00.030 "hosts": [] 00:15:00.030 }, 00:15:00.030 { 00:15:00.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.030 "subtype": "NVMe", 00:15:00.030 "listen_addresses": [ 00:15:00.030 { 00:15:00.030 "trtype": "TCP", 00:15:00.030 "adrfam": "IPv4", 00:15:00.030 "traddr": "10.0.0.3", 00:15:00.030 "trsvcid": "4420" 00:15:00.030 } 00:15:00.030 ], 00:15:00.030 "allow_any_host": true, 00:15:00.030 "hosts": [], 00:15:00.030 "serial_number": "SPDK00000000000001", 00:15:00.030 "model_number": "SPDK bdev Controller", 00:15:00.030 "max_namespaces": 32, 00:15:00.030 "min_cntlid": 1, 00:15:00.030 "max_cntlid": 65519, 00:15:00.030 "namespaces": [ 00:15:00.030 { 00:15:00.030 "nsid": 1, 00:15:00.030 "bdev_name": "Malloc0", 00:15:00.030 "name": "Malloc0", 00:15:00.030 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:00.030 "eui64": "ABCDEF0123456789", 00:15:00.030 "uuid": "1c4849cc-53b2-434e-93d7-17e44f616cef" 00:15:00.030 } 00:15:00.030 ] 00:15:00.030 } 00:15:00.030 ] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.030 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:00.030 [2024-11-26 19:22:58.466840] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
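Before the identify runs, the target was configured over its RPC socket; the rpc_cmd calls above map onto the plain-shell sketch below. The rpc.py path and socket location are assumptions based on the standard SPDK repo layout; every command name and argument is taken verbatim from the trace.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed paths

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems    # prints the JSON dump shown above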
00:15:00.030 [2024-11-26 19:22:58.466898] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73880 ] 00:15:00.291 [2024-11-26 19:22:58.620809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:00.291 [2024-11-26 19:22:58.620884] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:00.291 [2024-11-26 19:22:58.620890] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:00.291 [2024-11-26 19:22:58.620919] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:00.291 [2024-11-26 19:22:58.620961] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:00.291 [2024-11-26 19:22:58.621292] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:00.291 [2024-11-26 19:22:58.621384] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20bb750 0 00:15:00.291 [2024-11-26 19:22:58.631981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:00.291 [2024-11-26 19:22:58.632017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:00.291 [2024-11-26 19:22:58.632023] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:00.291 [2024-11-26 19:22:58.632026] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:00.291 [2024-11-26 19:22:58.632062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.291 [2024-11-26 19:22:58.632069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.291 [2024-11-26 19:22:58.632073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.291 [2024-11-26 19:22:58.632085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:00.291 [2024-11-26 19:22:58.632132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.291 [2024-11-26 19:22:58.639948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.291 [2024-11-26 19:22:58.639969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.291 [2024-11-26 19:22:58.639990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.291 [2024-11-26 19:22:58.639996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.291 [2024-11-26 19:22:58.640025] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:00.292 [2024-11-26 19:22:58.640033] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:00.292 [2024-11-26 19:22:58.640040] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:00.292 [2024-11-26 19:22:58.640058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:00.292 [2024-11-26 19:22:58.640068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.292 [2024-11-26 19:22:58.640077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.292 [2024-11-26 19:22:58.640118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.292 [2024-11-26 19:22:58.640187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.292 [2024-11-26 19:22:58.640194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.292 [2024-11-26 19:22:58.640197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.292 [2024-11-26 19:22:58.640207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:00.292 [2024-11-26 19:22:58.640230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:00.292 [2024-11-26 19:22:58.640238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.292 [2024-11-26 19:22:58.640270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.292 [2024-11-26 19:22:58.640289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.292 [2024-11-26 19:22:58.640333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.292 [2024-11-26 19:22:58.640340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.292 [2024-11-26 19:22:58.640343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.292 [2024-11-26 19:22:58.640354] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:00.292 [2024-11-26 19:22:58.640362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.292 [2024-11-26 19:22:58.640370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.292 [2024-11-26 19:22:58.640386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.292 [2024-11-26 19:22:58.640404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.292 [2024-11-26 19:22:58.640448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.292 [2024-11-26 19:22:58.640455] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.292 [2024-11-26 19:22:58.640459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.292 [2024-11-26 19:22:58.640469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.292 [2024-11-26 19:22:58.640479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.292 [2024-11-26 19:22:58.640496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.292 [2024-11-26 19:22:58.640514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.292 [2024-11-26 19:22:58.640553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.292 [2024-11-26 19:22:58.640560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.292 [2024-11-26 19:22:58.640564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.292 [2024-11-26 19:22:58.640573] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:00.292 [2024-11-26 19:22:58.640578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:00.292 [2024-11-26 19:22:58.640586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.292 [2024-11-26 19:22:58.640697] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:00.292 [2024-11-26 19:22:58.640704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.292 [2024-11-26 19:22:58.640713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.292 [2024-11-26 19:22:58.640729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.292 [2024-11-26 19:22:58.640749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.292 [2024-11-26 19:22:58.640794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.292 [2024-11-26 19:22:58.640801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.292 [2024-11-26 19:22:58.640805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:15:00.292 [2024-11-26 19:22:58.640809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.292 [2024-11-26 19:22:58.640814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.292 [2024-11-26 19:22:58.640825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.292 [2024-11-26 19:22:58.640841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.292 [2024-11-26 19:22:58.640859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.292 [2024-11-26 19:22:58.640898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.292 [2024-11-26 19:22:58.640905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.292 [2024-11-26 19:22:58.640908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.292 [2024-11-26 19:22:58.640917] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.292 [2024-11-26 19:22:58.640922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:00.292 [2024-11-26 19:22:58.640930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:00.292 [2024-11-26 19:22:58.640940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.292 [2024-11-26 19:22:58.640951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.640955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.292 [2024-11-26 19:22:58.640963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.292 [2024-11-26 19:22:58.641023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.292 [2024-11-26 19:22:58.641114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.292 [2024-11-26 19:22:58.641122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.292 [2024-11-26 19:22:58.641126] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.641130] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bb750): datao=0, datal=4096, cccid=0 00:15:00.292 [2024-11-26 19:22:58.641135] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211f740) on tqpair(0x20bb750): expected_datao=0, payload_size=4096 00:15:00.292 [2024-11-26 19:22:58.641140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.641149] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.641154] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.641162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.292 [2024-11-26 19:22:58.641169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.292 [2024-11-26 19:22:58.641172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.641176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.292 [2024-11-26 19:22:58.641186] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:00.292 [2024-11-26 19:22:58.641191] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:00.292 [2024-11-26 19:22:58.641196] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:00.292 [2024-11-26 19:22:58.641207] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:00.292 [2024-11-26 19:22:58.641213] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:00.292 [2024-11-26 19:22:58.641218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:00.292 [2024-11-26 19:22:58.641227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.292 [2024-11-26 19:22:58.641236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.641240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.292 [2024-11-26 19:22:58.641244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:00.293 [2024-11-26 19:22:58.641274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.293 [2024-11-26 19:22:58.641329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.293 [2024-11-26 19:22:58.641337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.293 [2024-11-26 19:22:58.641341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.293 [2024-11-26 19:22:58.641353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.293 
[2024-11-26 19:22:58.641390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.293 [2024-11-26 19:22:58.641409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.293 [2024-11-26 19:22:58.641428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.293 [2024-11-26 19:22:58.641446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.293 [2024-11-26 19:22:58.641455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.293 [2024-11-26 19:22:58.641462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.293 [2024-11-26 19:22:58.641501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f740, cid 0, qid 0 00:15:00.293 [2024-11-26 19:22:58.641509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211f8c0, cid 1, qid 0 00:15:00.293 [2024-11-26 19:22:58.641513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fa40, cid 2, qid 0 00:15:00.293 [2024-11-26 19:22:58.641518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.293 [2024-11-26 19:22:58.641523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fd40, cid 4, qid 0 00:15:00.293 [2024-11-26 19:22:58.641603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.293 [2024-11-26 19:22:58.641610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.293 [2024-11-26 19:22:58.641614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fd40) on tqpair=0x20bb750 00:15:00.293 [2024-11-26 
19:22:58.641624] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:00.293 [2024-11-26 19:22:58.641629] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:00.293 [2024-11-26 19:22:58.641641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.293 [2024-11-26 19:22:58.641672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fd40, cid 4, qid 0 00:15:00.293 [2024-11-26 19:22:58.641726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.293 [2024-11-26 19:22:58.641733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.293 [2024-11-26 19:22:58.641736] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641740] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bb750): datao=0, datal=4096, cccid=4 00:15:00.293 [2024-11-26 19:22:58.641745] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211fd40) on tqpair(0x20bb750): expected_datao=0, payload_size=4096 00:15:00.293 [2024-11-26 19:22:58.641749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641756] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641760] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.293 [2024-11-26 19:22:58.641775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.293 [2024-11-26 19:22:58.641778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fd40) on tqpair=0x20bb750 00:15:00.293 [2024-11-26 19:22:58.641796] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:00.293 [2024-11-26 19:22:58.641821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.293 [2024-11-26 19:22:58.641842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.641850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.641856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.293 [2024-11-26 19:22:58.641881] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fd40, cid 4, qid 0 00:15:00.293 [2024-11-26 19:22:58.641889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fec0, cid 5, qid 0 00:15:00.293 [2024-11-26 19:22:58.642005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.293 [2024-11-26 19:22:58.642014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.293 [2024-11-26 19:22:58.642018] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642022] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bb750): datao=0, datal=1024, cccid=4 00:15:00.293 [2024-11-26 19:22:58.642026] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211fd40) on tqpair(0x20bb750): expected_datao=0, payload_size=1024 00:15:00.293 [2024-11-26 19:22:58.642031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642041] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.293 [2024-11-26 19:22:58.642052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.293 [2024-11-26 19:22:58.642056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fec0) on tqpair=0x20bb750 00:15:00.293 [2024-11-26 19:22:58.642079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.293 [2024-11-26 19:22:58.642087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.293 [2024-11-26 19:22:58.642091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fd40) on tqpair=0x20bb750 00:15:00.293 [2024-11-26 19:22:58.642107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.642120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.293 [2024-11-26 19:22:58.642145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fd40, cid 4, qid 0 00:15:00.293 [2024-11-26 19:22:58.642209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.293 [2024-11-26 19:22:58.642216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.293 [2024-11-26 19:22:58.642220] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642223] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bb750): datao=0, datal=3072, cccid=4 00:15:00.293 [2024-11-26 19:22:58.642228] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211fd40) on tqpair(0x20bb750): expected_datao=0, payload_size=3072 00:15:00.293 [2024-11-26 19:22:58.642232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642239] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:15:00.293 [2024-11-26 19:22:58.642243] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.293 [2024-11-26 19:22:58.642257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.293 [2024-11-26 19:22:58.642261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fd40) on tqpair=0x20bb750 00:15:00.293 [2024-11-26 19:22:58.642275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.293 [2024-11-26 19:22:58.642279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bb750) 00:15:00.293 [2024-11-26 19:22:58.642286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.293 [2024-11-26 19:22:58.642310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fd40, cid 4, qid 0 00:15:00.293 ===================================================== 00:15:00.293 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:00.293 ===================================================== 00:15:00.293 Controller Capabilities/Features 00:15:00.294 ================================ 00:15:00.294 Vendor ID: 0000 00:15:00.294 Subsystem Vendor ID: 0000 00:15:00.294 Serial Number: .................... 00:15:00.294 Model Number: ........................................ 00:15:00.294 Firmware Version: 25.01 00:15:00.294 Recommended Arb Burst: 0 00:15:00.294 IEEE OUI Identifier: 00 00 00 00:15:00.294 Multi-path I/O 00:15:00.294 May have multiple subsystem ports: No 00:15:00.294 May have multiple controllers: No 00:15:00.294 Associated with SR-IOV VF: No 00:15:00.294 Max Data Transfer Size: 131072 00:15:00.294 Max Number of Namespaces: 0 00:15:00.294 Max Number of I/O Queues: 1024 00:15:00.294 NVMe Specification Version (VS): 1.3 00:15:00.294 NVMe Specification Version (Identify): 1.3 00:15:00.294 Maximum Queue Entries: 128 00:15:00.294 Contiguous Queues Required: Yes 00:15:00.294 Arbitration Mechanisms Supported 00:15:00.294 Weighted Round Robin: Not Supported 00:15:00.294 Vendor Specific: Not Supported 00:15:00.294 Reset Timeout: 15000 ms 00:15:00.294 Doorbell Stride: 4 bytes 00:15:00.294 NVM Subsystem Reset: Not Supported 00:15:00.294 Command Sets Supported 00:15:00.294 NVM Command Set: Supported 00:15:00.294 Boot Partition: Not Supported 00:15:00.294 Memory Page Size Minimum: 4096 bytes 00:15:00.294 Memory Page Size Maximum: 4096 bytes 00:15:00.294 Persistent Memory Region: Not Supported 00:15:00.294 Optional Asynchronous Events Supported 00:15:00.294 Namespace Attribute Notices: Not Supported 00:15:00.294 Firmware Activation Notices: Not Supported 00:15:00.294 ANA Change Notices: Not Supported 00:15:00.294 PLE Aggregate Log Change Notices: Not Supported 00:15:00.294 LBA Status Info Alert Notices: Not Supported 00:15:00.294 EGE Aggregate Log Change Notices: Not Supported 00:15:00.294 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.294 Zone Descriptor Change Notices: Not Supported 00:15:00.294 Discovery Log Change Notices: Supported 00:15:00.294 Controller Attributes 00:15:00.294 128-bit Host Identifier: Not Supported 00:15:00.294 Non-Operational Permissive Mode: Not Supported 00:15:00.294 NVM Sets: Not Supported 
00:15:00.294 Read Recovery Levels: Not Supported 00:15:00.294 Endurance Groups: Not Supported 00:15:00.294 Predictable Latency Mode: Not Supported 00:15:00.294 Traffic Based Keep ALive: Not Supported 00:15:00.294 Namespace Granularity: Not Supported 00:15:00.294 SQ Associations: Not Supported 00:15:00.294 UUID List: Not Supported 00:15:00.294 Multi-Domain Subsystem: Not Supported 00:15:00.294 Fixed Capacity Management: Not Supported 00:15:00.294 Variable Capacity Management: Not Supported 00:15:00.294 Delete Endurance Group: Not Supported 00:15:00.294 Delete NVM Set: Not Supported 00:15:00.294 Extended LBA Formats Supported: Not Supported 00:15:00.294 Flexible Data Placement Supported: Not Supported 00:15:00.294 00:15:00.294 Controller Memory Buffer Support 00:15:00.294 ================================ 00:15:00.294 Supported: No 00:15:00.294 00:15:00.294 Persistent Memory Region Support 00:15:00.294 ================================ 00:15:00.294 Supported: No 00:15:00.294 00:15:00.294 Admin Command Set Attributes 00:15:00.294 ============================ 00:15:00.294 Security Send/Receive: Not Supported 00:15:00.294 Format NVM: Not Supported 00:15:00.294 Firmware Activate/Download: Not Supported 00:15:00.294 Namespace Management: Not Supported 00:15:00.294 Device Self-Test: Not Supported 00:15:00.294 Directives: Not Supported 00:15:00.294 NVMe-MI: Not Supported 00:15:00.294 Virtualization Management: Not Supported 00:15:00.294 Doorbell Buffer Config: Not Supported 00:15:00.294 Get LBA Status Capability: Not Supported 00:15:00.294 Command & Feature Lockdown Capability: Not Supported 00:15:00.294 Abort Command Limit: 1 00:15:00.294 Async Event Request Limit: 4 00:15:00.294 Number of Firmware Slots: N/A 00:15:00.294 Firmware Slot 1 Read-Only: N/A 00:15:00.294 Firm[2024-11-26 19:22:58.642371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.294 [2024-11-26 19:22:58.642378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.294 [2024-11-26 19:22:58.642381] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.294 [2024-11-26 19:22:58.642385] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bb750): datao=0, datal=8, cccid=4 00:15:00.294 [2024-11-26 19:22:58.642390] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211fd40) on tqpair(0x20bb750): expected_datao=0, payload_size=8 00:15:00.294 [2024-11-26 19:22:58.642394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.294 [2024-11-26 19:22:58.642400] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.294 [2024-11-26 19:22:58.642404] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.294 [2024-11-26 19:22:58.642419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.294 [2024-11-26 19:22:58.642426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.294 [2024-11-26 19:22:58.642430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.294 [2024-11-26 19:22:58.642434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fd40) on tqpair=0x20bb750 00:15:00.294 ware Activation Without Reset: N/A 00:15:00.294 Multiple Update Detection Support: N/A 00:15:00.294 Firmware Update Granularity: No Information Provided 00:15:00.294 Per-Namespace SMART Log: No 00:15:00.294 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.294 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:00.294 
Command Effects Log Page: Not Supported 00:15:00.294 Get Log Page Extended Data: Supported 00:15:00.294 Telemetry Log Pages: Not Supported 00:15:00.294 Persistent Event Log Pages: Not Supported 00:15:00.294 Supported Log Pages Log Page: May Support 00:15:00.294 Commands Supported & Effects Log Page: Not Supported 00:15:00.294 Feature Identifiers & Effects Log Page:May Support 00:15:00.294 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.294 Data Area 4 for Telemetry Log: Not Supported 00:15:00.294 Error Log Page Entries Supported: 128 00:15:00.294 Keep Alive: Not Supported 00:15:00.294 00:15:00.294 NVM Command Set Attributes 00:15:00.294 ========================== 00:15:00.294 Submission Queue Entry Size 00:15:00.294 Max: 1 00:15:00.294 Min: 1 00:15:00.294 Completion Queue Entry Size 00:15:00.294 Max: 1 00:15:00.294 Min: 1 00:15:00.294 Number of Namespaces: 0 00:15:00.294 Compare Command: Not Supported 00:15:00.294 Write Uncorrectable Command: Not Supported 00:15:00.294 Dataset Management Command: Not Supported 00:15:00.294 Write Zeroes Command: Not Supported 00:15:00.294 Set Features Save Field: Not Supported 00:15:00.294 Reservations: Not Supported 00:15:00.294 Timestamp: Not Supported 00:15:00.294 Copy: Not Supported 00:15:00.294 Volatile Write Cache: Not Present 00:15:00.294 Atomic Write Unit (Normal): 1 00:15:00.294 Atomic Write Unit (PFail): 1 00:15:00.294 Atomic Compare & Write Unit: 1 00:15:00.294 Fused Compare & Write: Supported 00:15:00.294 Scatter-Gather List 00:15:00.294 SGL Command Set: Supported 00:15:00.294 SGL Keyed: Supported 00:15:00.294 SGL Bit Bucket Descriptor: Not Supported 00:15:00.294 SGL Metadata Pointer: Not Supported 00:15:00.294 Oversized SGL: Not Supported 00:15:00.294 SGL Metadata Address: Not Supported 00:15:00.294 SGL Offset: Supported 00:15:00.294 Transport SGL Data Block: Not Supported 00:15:00.294 Replay Protected Memory Block: Not Supported 00:15:00.294 00:15:00.294 Firmware Slot Information 00:15:00.294 ========================= 00:15:00.294 Active slot: 0 00:15:00.294 00:15:00.294 00:15:00.294 Error Log 00:15:00.294 ========= 00:15:00.294 00:15:00.294 Active Namespaces 00:15:00.294 ================= 00:15:00.294 Discovery Log Page 00:15:00.294 ================== 00:15:00.294 Generation Counter: 2 00:15:00.294 Number of Records: 2 00:15:00.294 Record Format: 0 00:15:00.294 00:15:00.294 Discovery Log Entry 0 00:15:00.294 ---------------------- 00:15:00.294 Transport Type: 3 (TCP) 00:15:00.294 Address Family: 1 (IPv4) 00:15:00.294 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:00.294 Entry Flags: 00:15:00.294 Duplicate Returned Information: 1 00:15:00.294 Explicit Persistent Connection Support for Discovery: 1 00:15:00.294 Transport Requirements: 00:15:00.294 Secure Channel: Not Required 00:15:00.294 Port ID: 0 (0x0000) 00:15:00.294 Controller ID: 65535 (0xffff) 00:15:00.294 Admin Max SQ Size: 128 00:15:00.294 Transport Service Identifier: 4420 00:15:00.294 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:00.294 Transport Address: 10.0.0.3 00:15:00.294 Discovery Log Entry 1 00:15:00.294 ---------------------- 00:15:00.294 Transport Type: 3 (TCP) 00:15:00.294 Address Family: 1 (IPv4) 00:15:00.294 Subsystem Type: 2 (NVM Subsystem) 00:15:00.294 Entry Flags: 00:15:00.294 Duplicate Returned Information: 0 00:15:00.294 Explicit Persistent Connection Support for Discovery: 0 00:15:00.294 Transport Requirements: 00:15:00.295 Secure Channel: Not Required 00:15:00.295 Port ID: 0 (0x0000) 00:15:00.295 Controller ID: 65535 
(0xffff) 00:15:00.295 Admin Max SQ Size: 128 00:15:00.295 Transport Service Identifier: 4420 00:15:00.295 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:00.295 Transport Address: 10.0.0.3 [2024-11-26 19:22:58.642523] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:00.295 [2024-11-26 19:22:58.642537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f740) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.642545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.295 [2024-11-26 19:22:58.642550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211f8c0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.642555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.295 [2024-11-26 19:22:58.642560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fa40) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.642565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.295 [2024-11-26 19:22:58.642570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.642574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.295 [2024-11-26 19:22:58.642587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.642604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.642627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.642673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.642681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.642685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.642697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.642713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.642735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.642790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.642797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.642801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.642810] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:00.295 [2024-11-26 19:22:58.642815] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:00.295 [2024-11-26 19:22:58.642825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.642841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.642859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.642928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.642937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.642941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.642957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.642966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.642974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.642995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.643041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.643048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.643052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.643067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.643083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.643102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.643146] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.643153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.643157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.643172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.643188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.643206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.643250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.643257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.643261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.643276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.643292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.643325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.643364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.643371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.643375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.643389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.643405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.643423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.643461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.643468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 
19:22:58.643471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.643486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.643501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.643519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.643564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.643571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.643575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.295 [2024-11-26 19:22:58.643589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.295 [2024-11-26 19:22:58.643598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.295 [2024-11-26 19:22:58.643605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.295 [2024-11-26 19:22:58.643623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.295 [2024-11-26 19:22:58.643691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.295 [2024-11-26 19:22:58.643700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.295 [2024-11-26 19:22:58.643704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.643708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.296 [2024-11-26 19:22:58.643720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.643725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.643729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.296 [2024-11-26 19:22:58.643737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.296 [2024-11-26 19:22:58.643757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.296 [2024-11-26 19:22:58.643801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.296 [2024-11-26 19:22:58.643808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.296 [2024-11-26 19:22:58.643812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.643816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on 
tqpair=0x20bb750 00:15:00.296 [2024-11-26 19:22:58.643827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.643833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.643837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.296 [2024-11-26 19:22:58.643844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.296 [2024-11-26 19:22:58.643863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.296 [2024-11-26 19:22:58.647955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.296 [2024-11-26 19:22:58.647991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.296 [2024-11-26 19:22:58.648011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.648015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.296 [2024-11-26 19:22:58.648030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.648036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.648040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bb750) 00:15:00.296 [2024-11-26 19:22:58.648049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.296 [2024-11-26 19:22:58.648073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211fbc0, cid 3, qid 0 00:15:00.296 [2024-11-26 19:22:58.648175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.296 [2024-11-26 19:22:58.648182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.296 [2024-11-26 19:22:58.648192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.296 [2024-11-26 19:22:58.648196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x211fbc0) on tqpair=0x20bb750 00:15:00.296 [2024-11-26 19:22:58.648204] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:15:00.296 00:15:00.296 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:00.296 [2024-11-26 19:22:58.687596] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
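[editor's note] The spdk_nvme_identify invocation above (with -L all turning on every debug log flag) drives the connect/identify sequence traced in the lines that follow. The block below is a minimal sketch, not part of this test run, of the same identify-over-TCP flow using SPDK's public C API: the transport ID string is the one identify.sh passes via -r, while the program name "identify_sketch" and the selection of printed fields are illustrative choices, not anything the harness does. Connecting is what produces the admin-queue state machine logged below (connect adminq, read vs/cap, enable controller, identify controller, configure AER, keep-alive, number of queues, identify ns).

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (DPDK EAL), which is what the
	 * "DPDK EAL parameters" line below reports for the identify tool. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport ID string that identify.sh passes to spdk_nvme_identify -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Connect to the target; this runs the admin-queue state machine
	 * traced in the nvme_ctrlr.c/nvme_tcp.c debug lines below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	/* Cached Identify Controller data; the fields are space padded and
	 * not NUL terminated, hence the fixed-width prints. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number:    %.20s\n", cdata->sn);
	printf("Model Number:     %.40s\n", cdata->mn);
	printf("Firmware Version: %.8s\n", cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Such a sketch would be built against an installed SPDK (e.g. via the spdk_* pkg-config files the build installs); it is included only to make the debug trace below easier to follow, and the test itself uses the prebuilt spdk_nvme_identify binary shown above.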
00:15:00.296 [2024-11-26 19:22:58.687650] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73882 ] 00:15:00.560 [2024-11-26 19:22:58.841628] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:00.560 [2024-11-26 19:22:58.841694] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:00.560 [2024-11-26 19:22:58.841701] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:00.560 [2024-11-26 19:22:58.841713] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:00.560 [2024-11-26 19:22:58.841722] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:00.560 [2024-11-26 19:22:58.842002] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:00.560 [2024-11-26 19:22:58.842073] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20c6750 0 00:15:00.560 [2024-11-26 19:22:58.849006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:00.560 [2024-11-26 19:22:58.849046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:00.560 [2024-11-26 19:22:58.849068] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:00.560 [2024-11-26 19:22:58.849071] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:00.560 [2024-11-26 19:22:58.849102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.849109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.849113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.560 [2024-11-26 19:22:58.849123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:00.560 [2024-11-26 19:22:58.849154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.560 [2024-11-26 19:22:58.857009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.560 [2024-11-26 19:22:58.857033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.560 [2024-11-26 19:22:58.857054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.560 [2024-11-26 19:22:58.857071] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:00.560 [2024-11-26 19:22:58.857079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:00.560 [2024-11-26 19:22:58.857086] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:00.560 [2024-11-26 19:22:58.857103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857113] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.560 [2024-11-26 19:22:58.857122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.560 [2024-11-26 19:22:58.857150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.560 [2024-11-26 19:22:58.857207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.560 [2024-11-26 19:22:58.857214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.560 [2024-11-26 19:22:58.857218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.560 [2024-11-26 19:22:58.857228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:00.560 [2024-11-26 19:22:58.857236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:00.560 [2024-11-26 19:22:58.857244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.560 [2024-11-26 19:22:58.857261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.560 [2024-11-26 19:22:58.857296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.560 [2024-11-26 19:22:58.857340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.560 [2024-11-26 19:22:58.857348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.560 [2024-11-26 19:22:58.857352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.560 [2024-11-26 19:22:58.857362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:00.560 [2024-11-26 19:22:58.857371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.560 [2024-11-26 19:22:58.857379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.560 [2024-11-26 19:22:58.857395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.560 [2024-11-26 19:22:58.857414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.560 [2024-11-26 19:22:58.857463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.560 [2024-11-26 19:22:58.857470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.560 
[2024-11-26 19:22:58.857474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.560 [2024-11-26 19:22:58.857484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.560 [2024-11-26 19:22:58.857495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.560 [2024-11-26 19:22:58.857512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.560 [2024-11-26 19:22:58.857530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.560 [2024-11-26 19:22:58.857572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.560 [2024-11-26 19:22:58.857579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.560 [2024-11-26 19:22:58.857583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.560 [2024-11-26 19:22:58.857587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.560 [2024-11-26 19:22:58.857592] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:00.560 [2024-11-26 19:22:58.857597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:00.560 [2024-11-26 19:22:58.857606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.560 [2024-11-26 19:22:58.857717] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:00.560 [2024-11-26 19:22:58.857724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.560 [2024-11-26 19:22:58.857733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.857738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.857742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.857750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.561 [2024-11-26 19:22:58.857770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.561 [2024-11-26 19:22:58.857814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.561 [2024-11-26 19:22:58.857821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.561 [2024-11-26 19:22:58.857825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.857829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 
00:15:00.561 [2024-11-26 19:22:58.857835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.561 [2024-11-26 19:22:58.857845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.857851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.857855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.857862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.561 [2024-11-26 19:22:58.857880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.561 [2024-11-26 19:22:58.857939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.561 [2024-11-26 19:22:58.857948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.561 [2024-11-26 19:22:58.857952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.857956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.561 [2024-11-26 19:22:58.857961] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.561 [2024-11-26 19:22:58.857967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.857976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:00.561 [2024-11-26 19:22:58.857987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.857998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.561 [2024-11-26 19:22:58.858032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.561 [2024-11-26 19:22:58.858128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.561 [2024-11-26 19:22:58.858136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.561 [2024-11-26 19:22:58.858140] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858144] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=4096, cccid=0 00:15:00.561 [2024-11-26 19:22:58.858149] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212a740) on tqpair(0x20c6750): expected_datao=0, payload_size=4096 00:15:00.561 [2024-11-26 19:22:58.858154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858161] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858166] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.561 [2024-11-26 19:22:58.858180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.561 [2024-11-26 19:22:58.858184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.561 [2024-11-26 19:22:58.858197] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:00.561 [2024-11-26 19:22:58.858203] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:00.561 [2024-11-26 19:22:58.858208] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:00.561 [2024-11-26 19:22:58.858217] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:00.561 [2024-11-26 19:22:58.858222] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:00.561 [2024-11-26 19:22:58.858227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.858237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.858245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:00.561 [2024-11-26 19:22:58.858283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.561 [2024-11-26 19:22:58.858333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.561 [2024-11-26 19:22:58.858340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.561 [2024-11-26 19:22:58.858344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.561 [2024-11-26 19:22:58.858356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.561 [2024-11-26 19:22:58.858378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 
19:22:58.858386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.561 [2024-11-26 19:22:58.858399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.561 [2024-11-26 19:22:58.858420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.561 [2024-11-26 19:22:58.858440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.858448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.858456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.561 [2024-11-26 19:22:58.858493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a740, cid 0, qid 0 00:15:00.561 [2024-11-26 19:22:58.858501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212a8c0, cid 1, qid 0 00:15:00.561 [2024-11-26 19:22:58.858506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212aa40, cid 2, qid 0 00:15:00.561 [2024-11-26 19:22:58.858511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.561 [2024-11-26 19:22:58.858516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212ad40, cid 4, qid 0 00:15:00.561 [2024-11-26 19:22:58.858600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.561 [2024-11-26 19:22:58.858608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.561 [2024-11-26 19:22:58.858612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212ad40) on tqpair=0x20c6750 00:15:00.561 [2024-11-26 19:22:58.858621] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:00.561 [2024-11-26 19:22:58.858627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.858636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.858643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:00.561 [2024-11-26 19:22:58.858650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c6750) 00:15:00.561 [2024-11-26 19:22:58.858666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:00.561 [2024-11-26 19:22:58.858685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212ad40, cid 4, qid 0 00:15:00.561 [2024-11-26 19:22:58.858736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.561 [2024-11-26 19:22:58.858744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.561 [2024-11-26 19:22:58.858748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.561 [2024-11-26 19:22:58.858752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212ad40) on tqpair=0x20c6750 00:15:00.561 [2024-11-26 19:22:58.858819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.858832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.858841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.858846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.858854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.562 [2024-11-26 19:22:58.858874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212ad40, cid 4, qid 0 00:15:00.562 [2024-11-26 19:22:58.858953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.562 [2024-11-26 19:22:58.858962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.562 [2024-11-26 19:22:58.858966] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.858970] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=4096, cccid=4 00:15:00.562 [2024-11-26 19:22:58.858975] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212ad40) on tqpair(0x20c6750): expected_datao=0, payload_size=4096 00:15:00.562 [2024-11-26 19:22:58.858980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.858987] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.858991] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 
19:22:58.859000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.562 [2024-11-26 19:22:58.859006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.562 [2024-11-26 19:22:58.859010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212ad40) on tqpair=0x20c6750 00:15:00.562 [2024-11-26 19:22:58.859025] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:00.562 [2024-11-26 19:22:58.859038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.859072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.562 [2024-11-26 19:22:58.859094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212ad40, cid 4, qid 0 00:15:00.562 [2024-11-26 19:22:58.859207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.562 [2024-11-26 19:22:58.859216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.562 [2024-11-26 19:22:58.859220] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859224] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=4096, cccid=4 00:15:00.562 [2024-11-26 19:22:58.859230] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212ad40) on tqpair(0x20c6750): expected_datao=0, payload_size=4096 00:15:00.562 [2024-11-26 19:22:58.859234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859242] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859246] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.562 [2024-11-26 19:22:58.859261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.562 [2024-11-26 19:22:58.859265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212ad40) on tqpair=0x20c6750 00:15:00.562 [2024-11-26 19:22:58.859297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.859332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.562 [2024-11-26 19:22:58.859353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212ad40, cid 4, qid 0 00:15:00.562 [2024-11-26 19:22:58.859418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.562 [2024-11-26 19:22:58.859425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.562 [2024-11-26 19:22:58.859429] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859433] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=4096, cccid=4 00:15:00.562 [2024-11-26 19:22:58.859438] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212ad40) on tqpair(0x20c6750): expected_datao=0, payload_size=4096 00:15:00.562 [2024-11-26 19:22:58.859443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859450] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859454] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.562 [2024-11-26 19:22:58.859468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.562 [2024-11-26 19:22:58.859472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212ad40) on tqpair=0x20c6750 00:15:00.562 [2024-11-26 19:22:58.859486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859531] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:00.562 [2024-11-26 19:22:58.859536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:00.562 [2024-11-26 19:22:58.859542] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:00.562 [2024-11-26 19:22:58.859557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 
[2024-11-26 19:22:58.859563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.859570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.562 [2024-11-26 19:22:58.859578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.859592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.562 [2024-11-26 19:22:58.859618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212ad40, cid 4, qid 0 00:15:00.562 [2024-11-26 19:22:58.859626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212aec0, cid 5, qid 0 00:15:00.562 [2024-11-26 19:22:58.859700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.562 [2024-11-26 19:22:58.859708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.562 [2024-11-26 19:22:58.859712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212ad40) on tqpair=0x20c6750 00:15:00.562 [2024-11-26 19:22:58.859724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.562 [2024-11-26 19:22:58.859730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.562 [2024-11-26 19:22:58.859734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212aec0) on tqpair=0x20c6750 00:15:00.562 [2024-11-26 19:22:58.859749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.859762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.562 [2024-11-26 19:22:58.859782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212aec0, cid 5, qid 0 00:15:00.562 [2024-11-26 19:22:58.859830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.562 [2024-11-26 19:22:58.859837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.562 [2024-11-26 19:22:58.859841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212aec0) on tqpair=0x20c6750 00:15:00.562 [2024-11-26 19:22:58.859856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.859869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.562 [2024-11-26 19:22:58.859886] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212aec0, cid 5, qid 0 00:15:00.562 [2024-11-26 19:22:58.859955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.562 [2024-11-26 19:22:58.859964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.562 [2024-11-26 19:22:58.859968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212aec0) on tqpair=0x20c6750 00:15:00.562 [2024-11-26 19:22:58.859984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.562 [2024-11-26 19:22:58.859989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c6750) 00:15:00.562 [2024-11-26 19:22:58.859997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.563 [2024-11-26 19:22:58.860017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212aec0, cid 5, qid 0 00:15:00.563 [2024-11-26 19:22:58.860064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.563 [2024-11-26 19:22:58.860071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.563 [2024-11-26 19:22:58.860075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212aec0) on tqpair=0x20c6750 00:15:00.563 [2024-11-26 19:22:58.860099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c6750) 00:15:00.563 [2024-11-26 19:22:58.860113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.563 [2024-11-26 19:22:58.860122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c6750) 00:15:00.563 [2024-11-26 19:22:58.860133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.563 [2024-11-26 19:22:58.860141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x20c6750) 00:15:00.563 [2024-11-26 19:22:58.860152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.563 [2024-11-26 19:22:58.860160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20c6750) 00:15:00.563 [2024-11-26 19:22:58.860171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.563 [2024-11-26 19:22:58.860193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212aec0, cid 5, qid 0 00:15:00.563 
[2024-11-26 19:22:58.860200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212ad40, cid 4, qid 0 00:15:00.563 [2024-11-26 19:22:58.860205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212b040, cid 6, qid 0 00:15:00.563 [2024-11-26 19:22:58.860210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212b1c0, cid 7, qid 0 00:15:00.563 [2024-11-26 19:22:58.860357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.563 [2024-11-26 19:22:58.860374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.563 [2024-11-26 19:22:58.860379] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860383] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=8192, cccid=5 00:15:00.563 [2024-11-26 19:22:58.860388] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212aec0) on tqpair(0x20c6750): expected_datao=0, payload_size=8192 00:15:00.563 [2024-11-26 19:22:58.860393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860411] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860416] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.563 [2024-11-26 19:22:58.860428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.563 [2024-11-26 19:22:58.860432] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860436] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=512, cccid=4 00:15:00.563 [2024-11-26 19:22:58.860441] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212ad40) on tqpair(0x20c6750): expected_datao=0, payload_size=512 00:15:00.563 [2024-11-26 19:22:58.860446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860452] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860456] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.563 [2024-11-26 19:22:58.860468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.563 [2024-11-26 19:22:58.860472] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860476] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=512, cccid=6 00:15:00.563 [2024-11-26 19:22:58.860481] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212b040) on tqpair(0x20c6750): expected_datao=0, payload_size=512 00:15:00.563 [2024-11-26 19:22:58.860485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860492] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860495] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:00.563 [2024-11-26 19:22:58.860507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:00.563 [2024-11-26 19:22:58.860511] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860515] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c6750): datao=0, datal=4096, cccid=7 00:15:00.563 [2024-11-26 19:22:58.860519] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x212b1c0) on tqpair(0x20c6750): expected_datao=0, payload_size=4096 00:15:00.563 [2024-11-26 19:22:58.860524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860530] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860534] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.563 [2024-11-26 19:22:58.860546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.563 [2024-11-26 19:22:58.860550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212aec0) on tqpair=0x20c6750 00:15:00.563 [2024-11-26 19:22:58.860570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.563 [2024-11-26 19:22:58.860577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.563 [2024-11-26 19:22:58.860581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212ad40) on tqpair=0x20c6750 00:15:00.563 [2024-11-26 19:22:58.860598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.563 [2024-11-26 19:22:58.860604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.563 [2024-11-26 19:22:58.860608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.563 [2024-11-26 19:22:58.860612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212b040) on tqpair=0x20c6750 00:15:00.563 [2024-11-26 19:22:58.860620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.563 [2024-11-26 19:22:58.860626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.563 ===================================================== 00:15:00.563 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.563 ===================================================== 00:15:00.563 Controller Capabilities/Features 00:15:00.563 ================================ 00:15:00.563 Vendor ID: 8086 00:15:00.563 Subsystem Vendor ID: 8086 00:15:00.563 Serial Number: SPDK00000000000001 00:15:00.563 Model Number: SPDK bdev Controller 00:15:00.563 Firmware Version: 25.01 00:15:00.563 Recommended Arb Burst: 6 00:15:00.563 IEEE OUI Identifier: e4 d2 5c 00:15:00.563 Multi-path I/O 00:15:00.563 May have multiple subsystem ports: Yes 00:15:00.563 May have multiple controllers: Yes 00:15:00.563 Associated with SR-IOV VF: No 00:15:00.563 Max Data Transfer Size: 131072 00:15:00.563 Max Number of Namespaces: 32 00:15:00.563 Max Number of I/O Queues: 127 00:15:00.563 NVMe Specification Version (VS): 1.3 00:15:00.563 NVMe Specification Version (Identify): 1.3 00:15:00.563 Maximum Queue Entries: 128 00:15:00.563 Contiguous Queues Required: Yes 00:15:00.563 Arbitration Mechanisms Supported 00:15:00.563 Weighted Round Robin: Not Supported 00:15:00.563 Vendor Specific: Not Supported 00:15:00.563 Reset 
Timeout: 15000 ms 00:15:00.563 Doorbell Stride: 4 bytes 00:15:00.563 NVM Subsystem Reset: Not Supported 00:15:00.563 Command Sets Supported 00:15:00.563 NVM Command Set: Supported 00:15:00.563 Boot Partition: Not Supported 00:15:00.563 Memory Page Size Minimum: 4096 bytes 00:15:00.563 Memory Page Size Maximum: 4096 bytes 00:15:00.563 Persistent Memory Region: Not Supported 00:15:00.563 Optional Asynchronous Events Supported 00:15:00.563 Namespace Attribute Notices: Supported 00:15:00.563 Firmware Activation Notices: Not Supported 00:15:00.563 ANA Change Notices: Not Supported 00:15:00.563 PLE Aggregate Log Change Notices: Not Supported 00:15:00.563 LBA Status Info Alert Notices: Not Supported 00:15:00.563 EGE Aggregate Log Change Notices: Not Supported 00:15:00.563 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.563 Zone Descriptor Change Notices: Not Supported 00:15:00.563 Discovery Log Change Notices: Not Supported 00:15:00.563 Controller Attributes 00:15:00.563 128-bit Host Identifier: Supported 00:15:00.563 Non-Operational Permissive Mode: Not Supported 00:15:00.563 NVM Sets: Not Supported 00:15:00.563 Read Recovery Levels: Not Supported 00:15:00.563 Endurance Groups: Not Supported 00:15:00.563 Predictable Latency Mode: Not Supported 00:15:00.563 Traffic Based Keep ALive: Not Supported 00:15:00.563 Namespace Granularity: Not Supported 00:15:00.563 SQ Associations: Not Supported 00:15:00.563 UUID List: Not Supported 00:15:00.563 Multi-Domain Subsystem: Not Supported 00:15:00.563 Fixed Capacity Management: Not Supported 00:15:00.563 Variable Capacity Management: Not Supported 00:15:00.563 Delete Endurance Group: Not Supported 00:15:00.563 Delete NVM Set: Not Supported 00:15:00.563 Extended LBA Formats Supported: Not Supported 00:15:00.563 Flexible Data Placement Supported: Not Supported 00:15:00.563 00:15:00.563 Controller Memory Buffer Support 00:15:00.563 ================================ 00:15:00.563 Supported: No 00:15:00.563 00:15:00.563 Persistent Memory Region Support 00:15:00.563 ================================ 00:15:00.563 Supported: No 00:15:00.564 00:15:00.564 Admin Command Set Attributes 00:15:00.564 ============================ 00:15:00.564 Security Send/Receive: Not Supported 00:15:00.564 Format NVM: Not Supported 00:15:00.564 Firmware Activate/Download: Not Supported 00:15:00.564 Namespace Management: Not Supported 00:15:00.564 Device Self-Test: Not Supported 00:15:00.564 Directives: Not Supported 00:15:00.564 NVMe-MI: Not Supported 00:15:00.564 Virtualization Management: Not Supported 00:15:00.564 Doorbell Buffer Config: Not Supported 00:15:00.564 Get LBA Status Capability: Not Supported 00:15:00.564 Command & Feature Lockdown Capability: Not Supported 00:15:00.564 Abort Command Limit: 4 00:15:00.564 Async Event Request Limit: 4 00:15:00.564 Number of Firmware Slots: N/A 00:15:00.564 Firmware Slot 1 Read-Only: N/A 00:15:00.564 Firmware Activation Without Reset: N/A 00:15:00.564 Multiple Update Detection Support: N/A 00:15:00.564 Firmware Update Granularity: No Information Provided 00:15:00.564 Per-Namespace SMART Log: No 00:15:00.564 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.564 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:00.564 Command Effects Log Page: Supported 00:15:00.564 Get Log Page Extended Data: Supported 00:15:00.564 Telemetry Log Pages: Not Supported 00:15:00.564 Persistent Event Log Pages: Not Supported 00:15:00.564 Supported Log Pages Log Page: May Support 00:15:00.564 Commands Supported & Effects Log Page: Not Supported 
00:15:00.564 Feature Identifiers & Effects Log Page:May Support 00:15:00.564 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.564 Data Area 4 for Telemetry Log: Not Supported 00:15:00.564 Error Log Page Entries Supported: 128 00:15:00.564 Keep Alive: Supported 00:15:00.564 Keep Alive Granularity: 10000 ms 00:15:00.564 00:15:00.564 NVM Command Set Attributes 00:15:00.564 ========================== 00:15:00.564 Submission Queue Entry Size 00:15:00.564 Max: 64 00:15:00.564 Min: 64 00:15:00.564 Completion Queue Entry Size 00:15:00.564 Max: 16 00:15:00.564 Min: 16 00:15:00.564 Number of Namespaces: 32 00:15:00.564 Compare Command: Supported 00:15:00.564 Write Uncorrectable Command: Not Supported 00:15:00.564 Dataset Management Command: Supported 00:15:00.564 Write Zeroes Command: Supported 00:15:00.564 Set Features Save Field: Not Supported 00:15:00.564 Reservations: Supported 00:15:00.564 Timestamp: Not Supported 00:15:00.564 Copy: Supported 00:15:00.564 Volatile Write Cache: Present 00:15:00.564 Atomic Write Unit (Normal): 1 00:15:00.564 Atomic Write Unit (PFail): 1 00:15:00.564 Atomic Compare & Write Unit: 1 00:15:00.564 Fused Compare & Write: Supported 00:15:00.564 Scatter-Gather List 00:15:00.564 SGL Command Set: Supported 00:15:00.564 SGL Keyed: Supported 00:15:00.564 SGL Bit Bucket Descriptor: Not Supported 00:15:00.564 SGL Metadata Pointer: Not Supported 00:15:00.564 Oversized SGL: Not Supported 00:15:00.564 SGL Metadata Address: Not Supported 00:15:00.564 SGL Offset: Supported 00:15:00.564 Transport SGL Data Block: Not Supported 00:15:00.564 Replay Protected Memory Block: Not Supported 00:15:00.564 00:15:00.564 Firmware Slot Information 00:15:00.564 ========================= 00:15:00.564 Active slot: 1 00:15:00.564 Slot 1 Firmware Revision: 25.01 00:15:00.564 00:15:00.564 00:15:00.564 Commands Supported and Effects 00:15:00.564 ============================== 00:15:00.564 Admin Commands 00:15:00.564 -------------- 00:15:00.564 Get Log Page (02h): Supported 00:15:00.564 Identify (06h): Supported 00:15:00.564 Abort (08h): Supported 00:15:00.564 Set Features (09h): Supported 00:15:00.564 Get Features (0Ah): Supported 00:15:00.564 Asynchronous Event Request (0Ch): Supported 00:15:00.564 Keep Alive (18h): Supported 00:15:00.564 I/O Commands 00:15:00.564 ------------ 00:15:00.564 Flush (00h): Supported LBA-Change 00:15:00.564 Write (01h): Supported LBA-Change 00:15:00.564 Read (02h): Supported 00:15:00.564 Compare (05h): Supported 00:15:00.564 Write Zeroes (08h): Supported LBA-Change 00:15:00.564 Dataset Management (09h): Supported LBA-Change 00:15:00.564 Copy (19h): Supported LBA-Change 00:15:00.564 00:15:00.564 Error Log 00:15:00.564 ========= 00:15:00.564 00:15:00.564 Arbitration 00:15:00.564 =========== 00:15:00.564 Arbitration Burst: 1 00:15:00.564 00:15:00.564 Power Management 00:15:00.564 ================ 00:15:00.564 Number of Power States: 1 00:15:00.564 Current Power State: Power State #0 00:15:00.564 Power State #0: 00:15:00.564 Max Power: 0.00 W 00:15:00.564 Non-Operational State: Operational 00:15:00.564 Entry Latency: Not Reported 00:15:00.564 Exit Latency: Not Reported 00:15:00.564 Relative Read Throughput: 0 00:15:00.564 Relative Read Latency: 0 00:15:00.564 Relative Write Throughput: 0 00:15:00.564 Relative Write Latency: 0 00:15:00.564 Idle Power: Not Reported 00:15:00.564 Active Power: Not Reported 00:15:00.564 Non-Operational Permissive Mode: Not Supported 00:15:00.564 00:15:00.564 Health Information 00:15:00.564 ================== 00:15:00.564 Critical Warnings: 
00:15:00.564 Available Spare Space: OK 00:15:00.564 Temperature: OK 00:15:00.564 Device Reliability: OK 00:15:00.564 Read Only: No 00:15:00.564 Volatile Memory Backup: OK 00:15:00.564 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:00.564 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:00.564 Available Spare: 0% 00:15:00.564 Available Spare Threshold: 0% 00:15:00.564 Life Percentage Used:[2024-11-26 19:22:58.860629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.564 [2024-11-26 19:22:58.860634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212b1c0) on tqpair=0x20c6750 00:15:00.564 [2024-11-26 19:22:58.860747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.564 [2024-11-26 19:22:58.860756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20c6750) 00:15:00.564 [2024-11-26 19:22:58.860765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.564 [2024-11-26 19:22:58.860791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212b1c0, cid 7, qid 0 00:15:00.564 [2024-11-26 19:22:58.860839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.564 [2024-11-26 19:22:58.860847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.564 [2024-11-26 19:22:58.860851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.564 [2024-11-26 19:22:58.860855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212b1c0) on tqpair=0x20c6750 00:15:00.564 [2024-11-26 19:22:58.864935] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:00.564 [2024-11-26 19:22:58.864959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a740) on tqpair=0x20c6750 00:15:00.564 [2024-11-26 19:22:58.864967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.564 [2024-11-26 19:22:58.864974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212a8c0) on tqpair=0x20c6750 00:15:00.564 [2024-11-26 19:22:58.864979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.564 [2024-11-26 19:22:58.864984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212aa40) on tqpair=0x20c6750 00:15:00.564 [2024-11-26 19:22:58.864989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.564 [2024-11-26 19:22:58.864994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.564 [2024-11-26 19:22:58.864999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.564 [2024-11-26 19:22:58.865010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.564 [2024-11-26 19:22:58.865015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865258] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:00.565 [2024-11-26 19:22:58.865263] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:00.565 [2024-11-26 19:22:58.865274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865402] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.865882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.865890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.865905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.865923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.865932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.865940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.865960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.866006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.866013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.866017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.866032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.866050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.866067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.866109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.866120] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.866123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.866138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.866155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.866173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.866215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.866222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.565 [2024-11-26 19:22:58.866226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.565 [2024-11-26 19:22:58.866241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.565 [2024-11-26 19:22:58.866250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.565 [2024-11-26 19:22:58.866258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.565 [2024-11-26 19:22:58.866276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.565 [2024-11-26 19:22:58.866317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.565 [2024-11-26 19:22:58.866325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.866328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.866343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.866360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.866378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.866421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.866428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.866432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 
19:22:58.866445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.866456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.866473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.866491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.866539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.866546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.866550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.866580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.866597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.866614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.866655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.866662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.866666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.866680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.866697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.866714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.866758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.866765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.866768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.866783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.866799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.866817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.866869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.866881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.866885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.866926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.866937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.866945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.866965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.867015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.867029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.867034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.867050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.867068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.867088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.867133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.867140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.867144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.867159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867169] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.867176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.867194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.867238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.867248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.867253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.867269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.867286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.867305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.867350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.867357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.867361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.867376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.867394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.867411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.867456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.867464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.867468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.867483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.867500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.867517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.867561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.566 [2024-11-26 19:22:58.867573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.566 [2024-11-26 19:22:58.867577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.566 [2024-11-26 19:22:58.867593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.566 [2024-11-26 19:22:58.867603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.566 [2024-11-26 19:22:58.867610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.566 [2024-11-26 19:22:58.867629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.566 [2024-11-26 19:22:58.867684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.867693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.867697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.867712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.867730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.867749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.867797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.867805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.867809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.867824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.867841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.867859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 
19:22:58.867913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.867922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.867926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.867942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.867952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.867960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.867979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.868035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.868138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 
[2024-11-26 19:22:58.868253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.868362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.868476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.868586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.868697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.868791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.868799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.868803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.868818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.868827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.868835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.868853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.567 [2024-11-26 19:22:58.872945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.567 [2024-11-26 19:22:58.872960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.567 [2024-11-26 19:22:58.872965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.872969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.567 [2024-11-26 19:22:58.872983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.872989] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:00.567 [2024-11-26 19:22:58.872993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c6750) 00:15:00.567 [2024-11-26 19:22:58.873002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:00.567 [2024-11-26 19:22:58.873028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x212abc0, cid 3, qid 0 00:15:00.568 [2024-11-26 19:22:58.873078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:00.568 [2024-11-26 19:22:58.873085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:00.568 [2024-11-26 19:22:58.873089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:00.568 [2024-11-26 19:22:58.873093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x212abc0) on tqpair=0x20c6750 00:15:00.568 [2024-11-26 19:22:58.873102] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:15:00.568 0% 00:15:00.568 Data Units Read: 0 00:15:00.568 Data Units Written: 0 00:15:00.568 Host Read Commands: 0 00:15:00.568 Host Write Commands: 0 00:15:00.568 Controller Busy Time: 0 minutes 00:15:00.568 Power Cycles: 0 00:15:00.568 Power On Hours: 0 hours 00:15:00.568 Unsafe Shutdowns: 0 00:15:00.568 Unrecoverable Media Errors: 0 00:15:00.568 Lifetime Error Log Entries: 0 00:15:00.568 Warning Temperature Time: 0 minutes 00:15:00.568 Critical Temperature Time: 0 minutes 00:15:00.568 00:15:00.568 Number of Queues 00:15:00.568 ================ 00:15:00.568 Number of I/O Submission Queues: 127 00:15:00.568 Number of I/O Completion Queues: 127 00:15:00.568 00:15:00.568 Active Namespaces 00:15:00.568 ================= 00:15:00.568 Namespace ID:1 00:15:00.568 Error Recovery Timeout: Unlimited 00:15:00.568 Command Set Identifier: NVM (00h) 00:15:00.568 Deallocate: Supported 00:15:00.568 Deallocated/Unwritten Error: Not Supported 00:15:00.568 Deallocated Read Value: Unknown 00:15:00.568 Deallocate in Write Zeroes: Not Supported 00:15:00.568 Deallocated Guard Field: 0xFFFF 00:15:00.568 Flush: Supported 00:15:00.568 Reservation: Supported 00:15:00.568 Namespace Sharing Capabilities: Multiple Controllers 00:15:00.568 Size (in LBAs): 131072 (0GiB) 00:15:00.568 Capacity (in LBAs): 131072 (0GiB) 00:15:00.568 Utilization (in LBAs): 131072 (0GiB) 00:15:00.568 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:00.568 EUI64: ABCDEF0123456789 00:15:00.568 UUID: 1c4849cc-53b2-434e-93d7-17e44f616cef 00:15:00.568 Thin Provisioning: Not Supported 00:15:00.568 Per-NS Atomic Units: Yes 00:15:00.568 Atomic Boundary Size (Normal): 0 00:15:00.568 Atomic Boundary Size (PFail): 0 00:15:00.568 Atomic Boundary Offset: 0 00:15:00.568 Maximum Single Source Range Length: 65535 00:15:00.568 Maximum Copy Length: 65535 00:15:00.568 Maximum Source Range Count: 1 00:15:00.568 NGUID/EUI64 Never Reused: No 00:15:00.568 Namespace Write Protected: No 00:15:00.568 Number of LBA Formats: 1 00:15:00.568 Current LBA Format: LBA Format #00 00:15:00.568 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:00.568 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:00.568 rmmod nvme_tcp 00:15:00.568 rmmod nvme_fabrics 00:15:00.568 rmmod nvme_keyring 00:15:00.568 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:00.827 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:00.827 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:00.827 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73853 ']' 00:15:00.827 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73853 00:15:00.827 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73853 ']' 00:15:00.827 19:22:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73853 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73853 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.827 killing process with pid 73853 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73853' 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73853 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73853 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.827 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:00.828 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.828 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:00.828 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.085 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.344 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:01.344 ************************************ 00:15:01.344 END TEST nvmf_identify 00:15:01.344 ************************************ 00:15:01.345 00:15:01.345 real 0m2.233s 00:15:01.345 user 0m4.473s 00:15:01.345 sys 0m0.742s 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:01.345 ************************************ 00:15:01.345 START TEST nvmf_perf 00:15:01.345 ************************************ 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:01.345 * Looking for test storage... 
00:15:01.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:01.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.345 --rc genhtml_branch_coverage=1 00:15:01.345 --rc genhtml_function_coverage=1 00:15:01.345 --rc genhtml_legend=1 00:15:01.345 --rc geninfo_all_blocks=1 00:15:01.345 --rc geninfo_unexecuted_blocks=1 00:15:01.345 00:15:01.345 ' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:01.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.345 --rc genhtml_branch_coverage=1 00:15:01.345 --rc genhtml_function_coverage=1 00:15:01.345 --rc genhtml_legend=1 00:15:01.345 --rc geninfo_all_blocks=1 00:15:01.345 --rc geninfo_unexecuted_blocks=1 00:15:01.345 00:15:01.345 ' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:01.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.345 --rc genhtml_branch_coverage=1 00:15:01.345 --rc genhtml_function_coverage=1 00:15:01.345 --rc genhtml_legend=1 00:15:01.345 --rc geninfo_all_blocks=1 00:15:01.345 --rc geninfo_unexecuted_blocks=1 00:15:01.345 00:15:01.345 ' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:01.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.345 --rc genhtml_branch_coverage=1 00:15:01.345 --rc genhtml_function_coverage=1 00:15:01.345 --rc genhtml_legend=1 00:15:01.345 --rc geninfo_all_blocks=1 00:15:01.345 --rc geninfo_unexecuted_blocks=1 00:15:01.345 00:15:01.345 ' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.345 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.346 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.346 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.605 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:01.605 Cannot find device "nvmf_init_br" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:01.605 Cannot find device "nvmf_init_br2" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:01.605 Cannot find device "nvmf_tgt_br" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:01.605 Cannot find device "nvmf_tgt_br2" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:01.605 Cannot find device "nvmf_init_br" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:01.605 Cannot find device "nvmf_init_br2" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:01.605 Cannot find device "nvmf_tgt_br" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:01.605 Cannot find device "nvmf_tgt_br2" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:01.605 Cannot find device "nvmf_br" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:01.605 Cannot find device "nvmf_init_if" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:01.605 Cannot find device "nvmf_init_if2" 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:01.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:01.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:01.605 19:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:01.605 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:01.605 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:01.605 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:01.605 19:23:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:01.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:01.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:15:01.865 00:15:01.865 --- 10.0.0.3 ping statistics --- 00:15:01.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.865 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:01.865 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:01.865 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:15:01.865 00:15:01.865 --- 10.0.0.4 ping statistics --- 00:15:01.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.865 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:01.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:15:01.865 00:15:01.865 --- 10.0.0.1 ping statistics --- 00:15:01.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.865 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:01.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:01.865 00:15:01.865 --- 10.0.0.2 ping statistics --- 00:15:01.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.865 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74102 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74102 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74102 ']' 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.865 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:01.865 [2024-11-26 19:23:00.259878] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:15:01.866 [2024-11-26 19:23:00.259994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.125 [2024-11-26 19:23:00.406961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.125 [2024-11-26 19:23:00.454076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.125 [2024-11-26 19:23:00.454136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.125 [2024-11-26 19:23:00.454162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.125 [2024-11-26 19:23:00.454170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.125 [2024-11-26 19:23:00.454176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.125 [2024-11-26 19:23:00.455397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.125 [2024-11-26 19:23:00.455537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.125 [2024-11-26 19:23:00.455633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.125 [2024-11-26 19:23:00.455635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.125 [2024-11-26 19:23:00.508765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:02.385 19:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:02.645 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:02.645 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:03.215 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:03.215 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:03.475 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:03.475 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:03.475 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:03.475 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:03.475 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:03.475 [2024-11-26 19:23:01.908951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.733 19:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.733 19:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:03.734 19:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.302 19:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:04.302 19:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:04.302 19:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:04.562 [2024-11-26 19:23:02.898325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:04.562 19:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:04.821 19:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:04.821 19:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:04.821 19:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:04.821 19:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:06.199 Initializing NVMe Controllers 00:15:06.199 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:06.199 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:06.199 Initialization complete. Launching workers. 00:15:06.199 ======================================================== 00:15:06.199 Latency(us) 00:15:06.199 Device Information : IOPS MiB/s Average min max 00:15:06.199 PCIE (0000:00:10.0) NSID 1 from core 0: 22284.23 87.05 1436.16 387.96 7938.70 00:15:06.199 ======================================================== 00:15:06.199 Total : 22284.23 87.05 1436.16 387.96 7938.70 00:15:06.199 00:15:06.199 19:23:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:07.136 Initializing NVMe Controllers 00:15:07.136 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.136 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:07.136 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:07.136 Initialization complete. Launching workers. 
00:15:07.136 ======================================================== 00:15:07.136 Latency(us) 00:15:07.136 Device Information : IOPS MiB/s Average min max 00:15:07.136 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3923.19 15.32 254.53 96.15 4275.74 00:15:07.136 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 127.49 0.50 7905.68 1971.46 12039.51 00:15:07.136 ======================================================== 00:15:07.136 Total : 4050.67 15.82 495.33 96.15 12039.51 00:15:07.136 00:15:07.395 19:23:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:08.773 Initializing NVMe Controllers 00:15:08.773 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.773 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.773 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:08.773 Initialization complete. Launching workers. 00:15:08.773 ======================================================== 00:15:08.773 Latency(us) 00:15:08.773 Device Information : IOPS MiB/s Average min max 00:15:08.773 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9244.01 36.11 3462.25 543.53 9941.45 00:15:08.773 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3949.15 15.43 8116.33 5096.75 16280.89 00:15:08.773 ======================================================== 00:15:08.773 Total : 13193.16 51.54 4855.37 543.53 16280.89 00:15:08.773 00:15:08.773 19:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:08.773 19:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:11.309 Initializing NVMe Controllers 00:15:11.309 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:11.309 Controller IO queue size 128, less than required. 00:15:11.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:11.309 Controller IO queue size 128, less than required. 00:15:11.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:11.309 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:11.309 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:11.309 Initialization complete. Launching workers. 
00:15:11.309 ======================================================== 00:15:11.309 Latency(us) 00:15:11.309 Device Information : IOPS MiB/s Average min max 00:15:11.310 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1878.94 469.73 69237.23 35694.56 113583.04 00:15:11.310 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 668.09 167.02 193138.48 63375.91 306005.95 00:15:11.310 ======================================================== 00:15:11.310 Total : 2547.03 636.76 101736.71 35694.56 306005.95 00:15:11.310 00:15:11.310 19:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:11.569 Initializing NVMe Controllers 00:15:11.569 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:11.569 Controller IO queue size 128, less than required. 00:15:11.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:11.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:11.569 Controller IO queue size 128, less than required. 00:15:11.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:11.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:11.569 WARNING: Some requested NVMe devices were skipped 00:15:11.569 No valid NVMe controllers or AIO or URING devices found 00:15:11.569 19:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:14.105 Initializing NVMe Controllers 00:15:14.105 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:14.105 Controller IO queue size 128, less than required. 00:15:14.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:14.105 Controller IO queue size 128, less than required. 00:15:14.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:14.105 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:14.105 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:14.105 Initialization complete. Launching workers. 
00:15:14.105 00:15:14.105 ==================== 00:15:14.105 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:14.105 TCP transport: 00:15:14.105 polls: 9507 00:15:14.105 idle_polls: 5518 00:15:14.105 sock_completions: 3989 00:15:14.105 nvme_completions: 6529 00:15:14.105 submitted_requests: 9800 00:15:14.105 queued_requests: 1 00:15:14.105 00:15:14.105 ==================== 00:15:14.105 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:14.105 TCP transport: 00:15:14.105 polls: 12472 00:15:14.105 idle_polls: 8505 00:15:14.105 sock_completions: 3967 00:15:14.105 nvme_completions: 6681 00:15:14.105 submitted_requests: 10072 00:15:14.105 queued_requests: 1 00:15:14.105 ======================================================== 00:15:14.105 Latency(us) 00:15:14.105 Device Information : IOPS MiB/s Average min max 00:15:14.105 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1628.03 407.01 79487.04 37681.77 131378.29 00:15:14.105 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1665.94 416.48 77918.54 38107.52 130319.93 00:15:14.105 ======================================================== 00:15:14.105 Total : 3293.96 823.49 78693.76 37681.77 131378.29 00:15:14.105 00:15:14.105 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:14.105 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.364 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:14.365 rmmod nvme_tcp 00:15:14.365 rmmod nvme_fabrics 00:15:14.365 rmmod nvme_keyring 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74102 ']' 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74102 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74102 ']' 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74102 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.365 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74102 00:15:14.624 killing process with pid 74102 00:15:14.624 19:23:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.624 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.624 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74102' 00:15:14.624 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74102 00:15:14.624 19:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74102 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:14.884 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:15.143 00:15:15.143 real 0m13.984s 00:15:15.143 user 0m50.392s 00:15:15.143 sys 0m3.983s 00:15:15.143 19:23:13 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.143 19:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:15.143 ************************************ 00:15:15.143 END TEST nvmf_perf 00:15:15.143 ************************************ 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.432 ************************************ 00:15:15.432 START TEST nvmf_fio_host 00:15:15.432 ************************************ 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:15.432 * Looking for test storage... 00:15:15.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:15.432 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:15.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.433 --rc genhtml_branch_coverage=1 00:15:15.433 --rc genhtml_function_coverage=1 00:15:15.433 --rc genhtml_legend=1 00:15:15.433 --rc geninfo_all_blocks=1 00:15:15.433 --rc geninfo_unexecuted_blocks=1 00:15:15.433 00:15:15.433 ' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:15.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.433 --rc genhtml_branch_coverage=1 00:15:15.433 --rc genhtml_function_coverage=1 00:15:15.433 --rc genhtml_legend=1 00:15:15.433 --rc geninfo_all_blocks=1 00:15:15.433 --rc geninfo_unexecuted_blocks=1 00:15:15.433 00:15:15.433 ' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:15.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.433 --rc genhtml_branch_coverage=1 00:15:15.433 --rc genhtml_function_coverage=1 00:15:15.433 --rc genhtml_legend=1 00:15:15.433 --rc geninfo_all_blocks=1 00:15:15.433 --rc geninfo_unexecuted_blocks=1 00:15:15.433 00:15:15.433 ' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:15.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.433 --rc genhtml_branch_coverage=1 00:15:15.433 --rc genhtml_function_coverage=1 00:15:15.433 --rc genhtml_legend=1 00:15:15.433 --rc geninfo_all_blocks=1 00:15:15.433 --rc geninfo_unexecuted_blocks=1 00:15:15.433 00:15:15.433 ' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.433 19:23:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:15.433 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:15.433 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:15.434 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:15.717 Cannot find device "nvmf_init_br" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:15.717 Cannot find device "nvmf_init_br2" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:15.717 Cannot find device "nvmf_tgt_br" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:15.717 Cannot find device "nvmf_tgt_br2" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:15.717 Cannot find device "nvmf_init_br" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:15.717 Cannot find device "nvmf_init_br2" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:15.717 Cannot find device "nvmf_tgt_br" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:15.717 Cannot find device "nvmf_tgt_br2" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:15.717 Cannot find device "nvmf_br" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:15.717 Cannot find device "nvmf_init_if" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:15.717 Cannot find device "nvmf_init_if2" 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:15.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:15.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:15.717 19:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:15.717 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:15.718 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:15.718 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:15.718 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:15.718 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:15.718 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:15.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:15.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:15:15.977 00:15:15.977 --- 10.0.0.3 ping statistics --- 00:15:15.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.977 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:15.977 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:15.977 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:15:15.977 00:15:15.977 --- 10.0.0.4 ping statistics --- 00:15:15.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.977 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:15.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:15.977 00:15:15.977 --- 10.0.0.1 ping statistics --- 00:15:15.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.977 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:15.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:15:15.977 00:15:15.977 --- 10.0.0.2 ping statistics --- 00:15:15.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.977 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74558 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74558 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74558 ']' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.977 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.977 [2024-11-26 19:23:14.296013] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:15:15.977 [2024-11-26 19:23:14.296102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.236 [2024-11-26 19:23:14.447755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.237 [2024-11-26 19:23:14.501956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.237 [2024-11-26 19:23:14.502008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.237 [2024-11-26 19:23:14.502022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.237 [2024-11-26 19:23:14.502033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.237 [2024-11-26 19:23:14.502042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:16.237 [2024-11-26 19:23:14.503307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.237 [2024-11-26 19:23:14.503464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.237 [2024-11-26 19:23:14.503584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.237 [2024-11-26 19:23:14.503585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.237 [2024-11-26 19:23:14.558244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.237 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.237 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:16.237 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:16.496 [2024-11-26 19:23:14.840069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.496 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:16.496 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:16.496 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.496 19:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:17.063 Malloc1 00:15:17.063 19:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.063 19:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.629 19:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:17.629 [2024-11-26 19:23:16.055305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:17.887 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:18.146 19:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:18.146 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:18.146 fio-3.35 00:15:18.146 Starting 1 thread 00:15:20.682 00:15:20.682 test: (groupid=0, jobs=1): err= 0: pid=74632: Tue Nov 26 19:23:18 2024 00:15:20.682 read: IOPS=9400, BW=36.7MiB/s (38.5MB/s)(73.7MiB/2006msec) 00:15:20.682 slat (nsec): min=1874, max=1307.0k, avg=2405.62, stdev=10087.01 00:15:20.682 clat (usec): min=2680, max=12790, avg=7088.67, stdev=520.42 00:15:20.682 lat (usec): min=2726, max=12793, avg=7091.08, stdev=520.26 00:15:20.682 clat percentiles (usec): 00:15:20.682 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6718], 00:15:20.682 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7177], 00:15:20.682 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:15:20.682 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10945], 99.95th=[11994], 00:15:20.682 | 99.99th=[12780] 00:15:20.682 bw ( KiB/s): min=36648, max=38424, per=99.93%, avg=37576.00, stdev=807.62, samples=4 00:15:20.682 iops : min= 9162, max= 9606, avg=9394.00, stdev=201.90, samples=4 00:15:20.682 write: IOPS=9400, BW=36.7MiB/s (38.5MB/s)(73.7MiB/2006msec); 0 zone resets 00:15:20.682 slat (nsec): min=1968, max=289956, avg=2468.53, stdev=2623.72 00:15:20.682 clat (usec): min=2515, max=12373, avg=6471.65, stdev=468.61 00:15:20.682 lat (usec): min=2529, max=12375, avg=6474.12, stdev=468.57 00:15:20.682 
clat percentiles (usec): 00:15:20.682 | 1.00th=[ 5538], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6128], 00:15:20.682 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:15:20.682 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:15:20.682 | 99.00th=[ 7701], 99.50th=[ 7963], 99.90th=[10028], 99.95th=[10945], 00:15:20.682 | 99.99th=[12256] 00:15:20.682 bw ( KiB/s): min=37440, max=38016, per=100.00%, avg=37602.00, stdev=278.08, samples=4 00:15:20.682 iops : min= 9360, max= 9504, avg=9400.50, stdev=69.52, samples=4 00:15:20.682 lat (msec) : 4=0.07%, 10=99.80%, 20=0.12% 00:15:20.682 cpu : usr=69.03%, sys=23.54%, ctx=27, majf=0, minf=7 00:15:20.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:20.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:20.682 issued rwts: total=18858,18858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.682 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:20.682 00:15:20.682 Run status group 0 (all jobs): 00:15:20.682 READ: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=73.7MiB (77.2MB), run=2006-2006msec 00:15:20.682 WRITE: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=73.7MiB (77.2MB), run=2006-2006msec 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:20.682 19:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:20.682 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:20.682 fio-3.35 00:15:20.682 Starting 1 thread 00:15:23.313 00:15:23.313 test: (groupid=0, jobs=1): err= 0: pid=74676: Tue Nov 26 19:23:21 2024 00:15:23.313 read: IOPS=8905, BW=139MiB/s (146MB/s)(279MiB/2007msec) 00:15:23.313 slat (usec): min=2, max=138, avg= 3.49, stdev= 2.44 00:15:23.313 clat (usec): min=2188, max=16110, avg=8131.81, stdev=2442.09 00:15:23.313 lat (usec): min=2191, max=16113, avg=8135.30, stdev=2442.15 00:15:23.313 clat percentiles (usec): 00:15:23.313 | 1.00th=[ 3884], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5866], 00:15:23.313 | 30.00th=[ 6587], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8586], 00:15:23.313 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11338], 95.00th=[12518], 00:15:23.313 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15795], 99.95th=[15926], 00:15:23.313 | 99.99th=[16057] 00:15:23.313 bw ( KiB/s): min=65440, max=79520, per=49.61%, avg=70696.00, stdev=6129.56, samples=4 00:15:23.313 iops : min= 4090, max= 4970, avg=4418.50, stdev=383.10, samples=4 00:15:23.313 write: IOPS=5134, BW=80.2MiB/s (84.1MB/s)(144MiB/1794msec); 0 zone resets 00:15:23.313 slat (usec): min=31, max=386, avg=35.85, stdev= 9.54 00:15:23.313 clat (usec): min=5158, max=19905, avg=11380.11, stdev=2002.14 00:15:23.313 lat (usec): min=5191, max=19949, avg=11415.96, stdev=2002.31 00:15:23.313 clat percentiles (usec): 00:15:23.313 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9634], 00:15:23.313 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:15:23.313 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14091], 95.00th=[14877], 00:15:23.313 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[19268], 00:15:23.313 | 99.99th=[19792] 00:15:23.313 bw ( KiB/s): min=68224, max=81856, per=89.70%, avg=73688.00, stdev=5786.57, samples=4 00:15:23.313 iops : min= 4264, max= 5116, avg=4605.50, stdev=361.66, samples=4 00:15:23.313 lat (msec) : 4=0.87%, 10=59.14%, 20=39.99% 00:15:23.313 cpu : usr=81.61%, sys=13.91%, ctx=3, majf=0, minf=8 00:15:23.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:23.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.313 issued rwts: total=17874,9211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.313 00:15:23.313 Run status group 0 (all jobs): 00:15:23.313 READ: 
bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=279MiB (293MB), run=2007-2007msec 00:15:23.313 WRITE: bw=80.2MiB/s (84.1MB/s), 80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=144MiB (151MB), run=1794-1794msec 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.313 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.313 rmmod nvme_tcp 00:15:23.313 rmmod nvme_fabrics 00:15:23.313 rmmod nvme_keyring 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74558 ']' 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74558 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74558 ']' 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74558 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74558 00:15:23.572 killing process with pid 74558 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74558' 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74558 00:15:23.572 19:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74558 00:15:23.572 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:23.572 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:23.572 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:23.572 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:23.572 19:23:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:23.572 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:23.572 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:23.831 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.831 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:23.831 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:23.831 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:23.832 00:15:23.832 real 0m8.636s 00:15:23.832 user 0m34.377s 00:15:23.832 sys 0m2.425s 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.832 ************************************ 00:15:23.832 END TEST nvmf_fio_host 00:15:23.832 19:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.832 ************************************ 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.092 ************************************ 00:15:24.092 START TEST nvmf_failover 
00:15:24.092 ************************************ 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:24.092 * Looking for test storage... 00:15:24.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.092 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.093 --rc genhtml_branch_coverage=1 00:15:24.093 --rc genhtml_function_coverage=1 00:15:24.093 --rc genhtml_legend=1 00:15:24.093 --rc geninfo_all_blocks=1 00:15:24.093 --rc geninfo_unexecuted_blocks=1 00:15:24.093 00:15:24.093 ' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.093 --rc genhtml_branch_coverage=1 00:15:24.093 --rc genhtml_function_coverage=1 00:15:24.093 --rc genhtml_legend=1 00:15:24.093 --rc geninfo_all_blocks=1 00:15:24.093 --rc geninfo_unexecuted_blocks=1 00:15:24.093 00:15:24.093 ' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.093 --rc genhtml_branch_coverage=1 00:15:24.093 --rc genhtml_function_coverage=1 00:15:24.093 --rc genhtml_legend=1 00:15:24.093 --rc geninfo_all_blocks=1 00:15:24.093 --rc geninfo_unexecuted_blocks=1 00:15:24.093 00:15:24.093 ' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.093 --rc genhtml_branch_coverage=1 00:15:24.093 --rc genhtml_function_coverage=1 00:15:24.093 --rc genhtml_legend=1 00:15:24.093 --rc geninfo_all_blocks=1 00:15:24.093 --rc geninfo_unexecuted_blocks=1 00:15:24.093 00:15:24.093 ' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.093 
19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.093 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.093 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:24.094 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:24.352 Cannot find device "nvmf_init_br" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:24.353 Cannot find device "nvmf_init_br2" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:24.353 Cannot find device "nvmf_tgt_br" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.353 Cannot find device "nvmf_tgt_br2" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:24.353 Cannot find device "nvmf_init_br" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:24.353 Cannot find device "nvmf_init_br2" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:24.353 Cannot find device "nvmf_tgt_br" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:24.353 Cannot find device "nvmf_tgt_br2" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:24.353 Cannot find device "nvmf_br" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:24.353 Cannot find device "nvmf_init_if" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:24.353 Cannot find device "nvmf_init_if2" 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:24.353 
19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.353 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:24.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:24.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:15:24.612 00:15:24.612 --- 10.0.0.3 ping statistics --- 00:15:24.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.612 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:24.612 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:24.612 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:24.612 00:15:24.612 --- 10.0.0.4 ping statistics --- 00:15:24.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.612 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:24.612 00:15:24.612 --- 10.0.0.1 ping statistics --- 00:15:24.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.612 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:24.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:15:24.612 00:15:24.612 --- 10.0.0.2 ping statistics --- 00:15:24.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.612 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74957 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74957 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74957 ']' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.612 19:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:24.612 [2024-11-26 19:23:22.946209] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:15:24.612 [2024-11-26 19:23:22.946291] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.871 [2024-11-26 19:23:23.096111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:24.871 [2024-11-26 19:23:23.150613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.871 [2024-11-26 19:23:23.150994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.871 [2024-11-26 19:23:23.151247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.871 [2024-11-26 19:23:23.151483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.871 [2024-11-26 19:23:23.151503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
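For anyone reproducing this outside the harness: the network that nvmf/common.sh assembles in the trace above boils down to the sketch below. Interface names, the 10.0.0.0/24 addresses, and the firewall rules are exactly as traced; the consolidation into one standalone root script is an assumption, not the harness code itself.

#!/usr/bin/env bash
# Sketch of the test network above: a namespace holding the NVMe-oF target,
# four veth pairs, and a bridge joining both sides. Run as root.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry IP addresses, the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Host (initiator) side gets 10.0.0.1/.2, namespace (target) side 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge both sides together and open TCP/4420 for NVMe-oF traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity check, mirroring the pings in the trace.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4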
00:15:24.871 [2024-11-26 19:23:23.152767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.871 [2024-11-26 19:23:23.153298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.871 [2024-11-26 19:23:23.153314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.871 [2024-11-26 19:23:23.210499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.871 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.871 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:24.871 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:24.871 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:24.871 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:25.130 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.130 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:25.389 [2024-11-26 19:23:23.607910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.389 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:25.648 Malloc0 00:15:25.648 19:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.907 19:23:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.166 19:23:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:26.424 [2024-11-26 19:23:24.771843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:26.424 19:23:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:26.683 [2024-11-26 19:23:25.016119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:26.683 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:26.941 [2024-11-26 19:23:25.260330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:26.941 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75007 00:15:26.941 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:26.941 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
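The target configuration issued over /var/tmp/spdk.sock above reduces to five RPCs. A minimal sketch, with every value taken from the trace and only the port loop added for brevity (it assumes nvmf_tgt is already up and listening on the default RPC socket):

# Target-side configuration as traced above, via scripts/rpc.py.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Three listeners on the namespace-side address; the failover test will
# remove and re-add these underneath the host while I/O is running.
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s "$port"
done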
00:15:26.941 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75007 /var/tmp/bdevperf.sock 00:15:26.942 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75007 ']' 00:15:26.942 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.942 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.942 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.942 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.942 19:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.877 19:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.877 19:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:27.877 19:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.444 NVMe0n1 00:15:28.444 19:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.701 00:15:28.701 19:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75031 00:15:28.701 19:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:28.701 19:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:29.636 19:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:29.896 [2024-11-26 19:23:28.144664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144769] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.896 [2024-11-26 19:23:28.144784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the 
state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.144998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 
19:23:28.145331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.897 [2024-11-26 19:23:28.145484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same 
with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 [2024-11-26 19:23:28.145619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14edd30 is same with the state(6) to be set 00:15:29.898 19:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:33.187 19:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:33.187 00:15:33.187 19:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:33.447 19:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:36.737 19:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:36.737 [2024-11-26 19:23:35.056416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:36.737 19:23:35 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:37.674 19:23:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:37.935 [2024-11-26 19:23:36.295294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.296930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.297809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to 
be set 00:15:37.935 [2024-11-26 19:23:36.297912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.298881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.299992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.300951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.301733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.301829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.301931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.302026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.302118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.302204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.302289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.302382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.935 [2024-11-26 19:23:36.302470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.302562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.302648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.302735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.302821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.302908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303003] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.303922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the 
state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.304988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.305974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.306851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.307883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.308020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.308127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.308266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.308358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.308451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.308544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 [2024-11-26 19:23:36.308672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ec660 is same with the state(6) to be set 00:15:37.936 19:23:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75031 00:15:44.511 { 00:15:44.511 "results": [ 00:15:44.511 { 00:15:44.511 "job": "NVMe0n1", 00:15:44.511 "core_mask": "0x1", 00:15:44.511 "workload": "verify", 00:15:44.511 "status": "finished", 00:15:44.511 "verify_range": { 00:15:44.511 "start": 0, 00:15:44.511 "length": 16384 00:15:44.511 }, 00:15:44.511 "queue_depth": 128, 00:15:44.511 "io_size": 4096, 00:15:44.511 "runtime": 15.007003, 00:15:44.511 "iops": 8771.571512313285, 00:15:44.511 "mibps": 34.26395121997377, 00:15:44.511 "io_failed": 3485, 00:15:44.511 "io_timeout": 0, 00:15:44.511 "avg_latency_us": 14186.793102750416, 00:15:44.511 "min_latency_us": 685.1490909090909, 00:15:44.511 "max_latency_us": 35031.97090909091 00:15:44.511 } 00:15:44.511 ], 00:15:44.511 "core_count": 1 00:15:44.511 } 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75007 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75007 ']' 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75007 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75007 
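For readability, the host-side failover exercise that produced the result block above can be condensed into the sketch below. Paths, ports, and flags are copied from the trace; the backgrounding with & and the consolidated layout are editorial assumptions rather than the harness's exact control flow.

# Condensed sketch of host/failover.sh as traced: bdevperf holds multiple
# paths (-x failover) while listeners are removed/re-added underneath it.
SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py                    # target-side RPCs (/var/tmp/spdk.sock)
BRPC="$RPC -s /var/tmp/bdevperf.sock"       # bdevperf-side RPCs
NQN=nqn.2016-06.io.spdk:cnode1

# -z: start idle and wait for RPC configuration before running I/O.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 15 -f &

# Two initial paths to the same subsystem; -x failover keeps one path active
# and switches to the other when it fails.
$BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $NQN -x failover
$BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n $NQN -x failover
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Shuffle listeners underneath the running I/O, mirroring the trace above.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420
sleep 3
$BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $NQN -x failover
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener    $NQN -t tcp -a 10.0.0.3 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4422

The io_failed count in the JSON summary above is consistent with I/Os that were in flight while a listener was being torn down; the try.txt dump that follows shows such completions reported as ABORTED - SQ DELETION on the path being removed.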
00:15:44.511 killing process with pid 75007 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75007' 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75007 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75007 00:15:44.511 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:44.511 [2024-11-26 19:23:25.327377] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:15:44.511 [2024-11-26 19:23:25.327471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 00:15:44.511 [2024-11-26 19:23:25.476041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.511 [2024-11-26 19:23:25.535370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.511 [2024-11-26 19:23:25.593761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.511 Running I/O for 15 seconds... 00:15:44.511 7588.00 IOPS, 29.64 MiB/s [2024-11-26T19:23:42.951Z] [2024-11-26 19:23:28.145669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.511 [2024-11-26 19:23:28.145708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.511 [2024-11-26 19:23:28.145733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.511 [2024-11-26 19:23:28.145749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.511 [2024-11-26 19:23:28.145765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.511 [2024-11-26 19:23:28.145779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.511 [2024-11-26 19:23:28.145794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.511 [2024-11-26 19:23:28.145807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.511 [2024-11-26 19:23:28.145822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.511 [2024-11-26 19:23:28.145836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.145850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.145864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.145879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.145892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.145924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.145953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.145967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.145995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 
19:23:28.146233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.512 [2024-11-26 19:23:28.146882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.512 [2024-11-26 19:23:28.146895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.146910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.146936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.146954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.146982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.146997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 
[2024-11-26 19:23:28.147451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.147954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.147999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.148029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.148043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.148058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.148071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.148085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.148120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.148135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.148148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.148162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.148175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.148189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.513 [2024-11-26 19:23:28.148202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.513 [2024-11-26 19:23:28.148216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67536 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:44.514 [2024-11-26 19:23:28.148856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.148981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.148995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.149010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.149036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.149052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.149066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.149081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.149095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.149110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.149124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.149139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.149152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.149167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.149181] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.514 [2024-11-26 19:23:28.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.514 [2024-11-26 19:23:28.149210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.515 [2024-11-26 19:23:28.149239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.515 [2024-11-26 19:23:28.149267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.515 [2024-11-26 19:23:28.149302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149500] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:28.149792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.515 [2024-11-26 19:23:28.149825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.149839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957fe0 is same with the state(6) to be set 00:15:44.515 [2024-11-26 19:23:28.149858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.515 [2024-11-26 19:23:28.149868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.515 [2024-11-26 19:23:28.149879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67744 len:8 PRP1 0x0 PRP2 0x0 00:15:44.515 [2024-11-26 19:23:28.149892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.150003] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:44.515 [2024-11-26 19:23:28.150062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.515 [2024-11-26 19:23:28.150084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.150099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.515 [2024-11-26 19:23:28.150111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.150124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.515 [2024-11-26 19:23:28.150137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.150150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.515 [2024-11-26 19:23:28.150163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:28.150176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:44.515 [2024-11-26 19:23:28.153665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:44.515 [2024-11-26 19:23:28.153702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e8c60 (9): Bad file descriptor 00:15:44.515 [2024-11-26 19:23:28.179586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
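The records above show bdev_nvme aborting the queued reads with SQ DELETION, starting a failover of the I/O path from 10.0.0.3:4420 to 10.0.0.3:4421 for nqn.2016-06.io.spdk:cnode1, and then resetting the controller successfully. A minimal sketch of how a secondary path like this can be registered ahead of time is shown below; the NQN, addresses and ports mirror this log, but the controller name, RPC socket path and the exact rpc.py invocations are assumptions for illustration and are not copied from host/failover.sh.
# Sketch only: expose a second TCP listener on the target and attach it as an
# alternate path for the same controller name inside the bdevperf app, so that
# bdev_nvme has a trid to fail over to when 10.0.0.3:4420 goes away.
# NQN/addresses are from this log; socket path and controller name are assumed.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4421
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Depending on SPDK version, a multipath/failover policy option may also be
# needed on the second attach (assumption; not taken from this log).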
00:15:44.515 8453.50 IOPS, 33.02 MiB/s [2024-11-26T19:23:42.955Z] 8963.67 IOPS, 35.01 MiB/s [2024-11-26T19:23:42.955Z] 9226.75 IOPS, 36.04 MiB/s [2024-11-26T19:23:42.955Z] [2024-11-26 19:23:31.777373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.515 [2024-11-26 19:23:31.777784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.515 [2024-11-26 19:23:31.777800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.777814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.777829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.777843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.777859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.777881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.777897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.777941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.777974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.777989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.516 [2024-11-26 19:23:31.778773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.516 [2024-11-26 19:23:31.778889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.516 [2024-11-26 19:23:31.778943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.778958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.778974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.778989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:44.517 [2024-11-26 19:23:31.779125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 
19:23:31.779447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.517 [2024-11-26 19:23:31.779548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.779966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.779983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.517 [2024-11-26 19:23:31.780237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.517 [2024-11-26 19:23:31.780253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100376 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:44.518 [2024-11-26 19:23:31.780764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.780872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.780975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.780990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.781004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.781033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.781061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.781090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:44.518 [2024-11-26 19:23:31.781118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.781147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.781175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.781211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.781241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.781269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.781303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.518 [2024-11-26 19:23:31.781332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95c370 is same with the state(6) to be set 00:15:44.518 [2024-11-26 19:23:31.781362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.518 [2024-11-26 19:23:31.781373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.518 [2024-11-26 19:23:31.781383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99968 len:8 PRP1 0x0 PRP2 0x0 00:15:44.518 [2024-11-26 
19:23:31.781397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.518 [2024-11-26 19:23:31.781421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.518 [2024-11-26 19:23:31.781432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100488 len:8 PRP1 0x0 PRP2 0x0 00:15:44.518 [2024-11-26 19:23:31.781445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.518 [2024-11-26 19:23:31.781459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.518 [2024-11-26 19:23:31.781468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.518 [2024-11-26 19:23:31.781478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0 00:15:44.518 [2024-11-26 19:23:31.781492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.519 [2024-11-26 19:23:31.781515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.519 [2024-11-26 19:23:31.781525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100504 len:8 PRP1 0x0 PRP2 0x0 00:15:44.519 [2024-11-26 19:23:31.781539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.519 [2024-11-26 19:23:31.781562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.519 [2024-11-26 19:23:31.781578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 PRP1 0x0 PRP2 0x0 00:15:44.519 [2024-11-26 19:23:31.781593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.519 [2024-11-26 19:23:31.781616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.519 [2024-11-26 19:23:31.781625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100520 len:8 PRP1 0x0 PRP2 0x0 00:15:44.519 [2024-11-26 19:23:31.781639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.519 [2024-11-26 19:23:31.781661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.519 [2024-11-26 19:23:31.781672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100528 len:8 PRP1 0x0 PRP2 0x0 00:15:44.519 [2024-11-26 19:23:31.781689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.519 [2024-11-26 19:23:31.781712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.519 [2024-11-26 19:23:31.781722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:15:44.519 [2024-11-26 19:23:31.781735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:44.519 [2024-11-26 19:23:31.781758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:44.519 [2024-11-26 19:23:31.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:15:44.519 [2024-11-26 19:23:31.781782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781841] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:44.519 [2024-11-26 19:23:31.781913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:31.781937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:31.781964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.781978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:31.781991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.782005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:31.782018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:31.782031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:44.519 [2024-11-26 19:23:31.785656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:44.519 [2024-11-26 19:23:31.785707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e8c60 (9): Bad file descriptor 00:15:44.519 [2024-11-26 19:23:31.815619] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
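The burst of "ABORTED - SQ DELETION (00/08)" completions above is the driver aborting every queued READ/WRITE when the TCP qpair to 10.0.0.3:4421 is torn down; bdev_nvme then fails the path over to 10.0.0.3:4422, disconnects, and resets the controller, which the final notice reports as successful. As a rough offline check (not part of the test itself), the aborted command prints can be tallied from a saved copy of this console output; the file name build.log below is hypothetical, and the counts simply reflect how many READ versus WRITE command prints accompany the abort completions:

    # Count the READ and WRITE command prints that accompany the abort completions
    # in a locally saved copy of this log (standard GNU grep/wc only).
    grep -o 'print_command: \*NOTICE\*: READ'  build.log | wc -l   # aborted reads
    grep -o 'print_command: \*NOTICE\*: WRITE' build.log | wc -l   # aborted writes
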
00:15:44.519 9090.80 IOPS, 35.51 MiB/s [2024-11-26T19:23:42.959Z] 8651.00 IOPS, 33.79 MiB/s [2024-11-26T19:23:42.959Z] 8360.86 IOPS, 32.66 MiB/s [2024-11-26T19:23:42.959Z] 8126.25 IOPS, 31.74 MiB/s [2024-11-26T19:23:42.959Z] 7915.33 IOPS, 30.92 MiB/s [2024-11-26T19:23:42.959Z] [2024-11-26 19:23:36.300951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:36.301010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.301040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:36.301063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.301079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:36.301093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.301108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.519 [2024-11-26 19:23:36.301123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.301138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8c60 is same with the state(6) to be set 00:15:44.519 [2024-11-26 19:23:36.308814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.308861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.308920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.308957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.308976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.308991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 
[2024-11-26 19:23:36.309458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.519 [2024-11-26 19:23:36.309517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.519 [2024-11-26 19:23:36.309532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.309966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.309993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:125 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.520 [2024-11-26 19:23:36.310748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.520 [2024-11-26 19:23:36.310762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.521 [2024-11-26 19:23:36.310777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.521 [2024-11-26 19:23:36.310792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.521 [2024-11-26 19:23:36.310807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77496 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:44.521 [2024-11-26 19:23:36.310820 - 19:23:36.313294] nvme_qpair.c: *NOTICE*: repeated for each queued command on sqid:1 — READ commands (lba:77504 through lba:78016, len:8) and WRITE commands (lba:78024 through lba:78064, len:8) printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:44.522 [2024-11-26 19:23:36.313350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97db0 is same with the state(6) to be set 
00:15:44.522 [2024-11-26 19:23:36.313365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:15:44.522 [2024-11-26 19:23:36.313374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:15:44.522 [2024-11-26 19:23:36.313384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 
00:15:44.522 [2024-11-26 19:23:36.313397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:44.522 [2024-11-26 19:23:36.313455] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 
00:15:44.522 [2024-11-26 19:23:36.313473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:15:44.522 [2024-11-26 19:23:36.313501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e8c60 (9): Bad file descriptor 
00:15:44.522 [2024-11-26 19:23:36.317527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 
00:15:44.522 [2024-11-26 19:23:36.343392] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:15:44.522 7981.20 IOPS, 31.18 MiB/s [2024-11-26T19:23:42.962Z] 8183.36 IOPS, 31.97 MiB/s [2024-11-26T19:23:42.962Z] 8362.75 IOPS, 32.67 MiB/s [2024-11-26T19:23:42.962Z] 8517.00 IOPS, 33.27 MiB/s [2024-11-26T19:23:42.962Z] 8645.79 IOPS, 33.77 MiB/s [2024-11-26T19:23:42.962Z] 8769.13 IOPS, 34.25 MiB/s 00:15:44.522 Latency(us) 00:15:44.522 [2024-11-26T19:23:42.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.522 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:44.522 Verification LBA range: start 0x0 length 0x4000 00:15:44.522 NVMe0n1 : 15.01 8771.57 34.26 232.22 0.00 14186.79 685.15 35031.97 00:15:44.522 [2024-11-26T19:23:42.963Z] =================================================================================================================== 00:15:44.523 [2024-11-26T19:23:42.963Z] Total : 8771.57 34.26 232.22 0.00 14186.79 685.15 35031.97 00:15:44.523 Received shutdown signal, test time was about 15.000000 seconds 00:15:44.523 00:15:44.523 Latency(us) 00:15:44.523 [2024-11-26T19:23:42.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.523 [2024-11-26T19:23:42.963Z] =================================================================================================================== 00:15:44.523 [2024-11-26T19:23:42.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75210 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75210 /var/tmp/bdevperf.sock 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75210 ']' 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:44.523 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:44.781 [2024-11-26 19:23:42.958825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:44.781 19:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:44.781 [2024-11-26 19:23:43.214987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:45.039 19:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:45.298 NVMe0n1 00:15:45.298 19:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:45.557 00:15:45.557 19:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:45.816 00:15:45.816 19:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:45.816 19:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:46.075 19:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.333 19:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:49.618 19:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:49.618 19:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:49.618 19:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75284 00:15:49.618 19:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:49.618 19:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75284 00:15:51.022 { 00:15:51.022 "results": [ 00:15:51.022 { 00:15:51.022 "job": "NVMe0n1", 00:15:51.022 "core_mask": "0x1", 00:15:51.022 "workload": "verify", 00:15:51.022 "status": "finished", 00:15:51.022 "verify_range": { 00:15:51.022 "start": 0, 00:15:51.022 "length": 16384 00:15:51.022 }, 00:15:51.022 "queue_depth": 128, 
00:15:51.022 "io_size": 4096, 00:15:51.022 "runtime": 1.007493, 00:15:51.022 "iops": 7643.725564346353, 00:15:51.022 "mibps": 29.858302985727942, 00:15:51.022 "io_failed": 0, 00:15:51.022 "io_timeout": 0, 00:15:51.022 "avg_latency_us": 16680.99541830459, 00:15:51.022 "min_latency_us": 2100.130909090909, 00:15:51.022 "max_latency_us": 14834.967272727272 00:15:51.022 } 00:15:51.022 ], 00:15:51.022 "core_count": 1 00:15:51.022 } 00:15:51.022 19:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:51.022 [2024-11-26 19:23:42.370621] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:15:51.022 [2024-11-26 19:23:42.370746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75210 ] 00:15:51.022 [2024-11-26 19:23:42.516890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.022 [2024-11-26 19:23:42.562408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.022 [2024-11-26 19:23:42.617140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:51.022 [2024-11-26 19:23:44.659061] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:51.022 [2024-11-26 19:23:44.659159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.022 [2024-11-26 19:23:44.659183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.022 [2024-11-26 19:23:44.659199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.023 [2024-11-26 19:23:44.659211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.023 [2024-11-26 19:23:44.659225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.023 [2024-11-26 19:23:44.659237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.023 [2024-11-26 19:23:44.659250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.023 [2024-11-26 19:23:44.659262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.023 [2024-11-26 19:23:44.659274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:51.023 [2024-11-26 19:23:44.659313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:51.023 [2024-11-26 19:23:44.659343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x599c60 (9): Bad file descriptor 00:15:51.023 [2024-11-26 19:23:44.667051] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:51.023 Running I/O for 1 seconds... 
00:15:51.023 7573.00 IOPS, 29.58 MiB/s 00:15:51.023 Latency(us) 00:15:51.023 [2024-11-26T19:23:49.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.023 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:51.023 Verification LBA range: start 0x0 length 0x4000 00:15:51.023 NVMe0n1 : 1.01 7643.73 29.86 0.00 0.00 16681.00 2100.13 14834.97 00:15:51.023 [2024-11-26T19:23:49.463Z] =================================================================================================================== 00:15:51.023 [2024-11-26T19:23:49.463Z] Total : 7643.73 29.86 0.00 0.00 16681.00 2100.13 14834.97 00:15:51.023 19:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:51.023 19:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.023 19:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.590 19:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.590 19:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:51.590 19:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.849 19:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75210 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75210 ']' 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75210 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.146 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75210 00:15:55.418 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.418 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.418 killing process with pid 75210 00:15:55.418 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75210' 00:15:55.418 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75210 00:15:55.419 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75210 00:15:55.419 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:55.419 19:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.690 rmmod nvme_tcp 00:15:55.690 rmmod nvme_fabrics 00:15:55.690 rmmod nvme_keyring 00:15:55.690 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74957 ']' 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74957 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74957 ']' 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74957 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74957 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:55.948 killing process with pid 74957 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74957' 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74957 00:15:55.948 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74957 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:56.206 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:56.463 ************************************ 00:15:56.463 END TEST nvmf_failover 00:15:56.463 ************************************ 00:15:56.463 00:15:56.463 real 0m32.411s 00:15:56.463 user 2m4.840s 00:15:56.463 sys 0m5.665s 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.463 19:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.464 ************************************ 00:15:56.464 START TEST nvmf_host_discovery 00:15:56.464 ************************************ 00:15:56.464 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:56.464 * Looking for test storage... 
00:15:56.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:56.464 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:56.464 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:56.464 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:56.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.722 --rc genhtml_branch_coverage=1 00:15:56.722 --rc genhtml_function_coverage=1 00:15:56.722 --rc genhtml_legend=1 00:15:56.722 --rc geninfo_all_blocks=1 00:15:56.722 --rc geninfo_unexecuted_blocks=1 00:15:56.722 00:15:56.722 ' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:56.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.722 --rc genhtml_branch_coverage=1 00:15:56.722 --rc genhtml_function_coverage=1 00:15:56.722 --rc genhtml_legend=1 00:15:56.722 --rc geninfo_all_blocks=1 00:15:56.722 --rc geninfo_unexecuted_blocks=1 00:15:56.722 00:15:56.722 ' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:56.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.722 --rc genhtml_branch_coverage=1 00:15:56.722 --rc genhtml_function_coverage=1 00:15:56.722 --rc genhtml_legend=1 00:15:56.722 --rc geninfo_all_blocks=1 00:15:56.722 --rc geninfo_unexecuted_blocks=1 00:15:56.722 00:15:56.722 ' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:56.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.722 --rc genhtml_branch_coverage=1 00:15:56.722 --rc genhtml_function_coverage=1 00:15:56.722 --rc genhtml_legend=1 00:15:56.722 --rc geninfo_all_blocks=1 00:15:56.722 --rc geninfo_unexecuted_blocks=1 00:15:56.722 00:15:56.722 ' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.722 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:56.723 Cannot find device "nvmf_init_br" 00:15:56.723 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:56.723 Cannot find device "nvmf_init_br2" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:56.723 Cannot find device "nvmf_tgt_br" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.723 Cannot find device "nvmf_tgt_br2" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:56.723 Cannot find device "nvmf_init_br" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:56.723 Cannot find device "nvmf_init_br2" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:56.723 Cannot find device "nvmf_tgt_br" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:56.723 Cannot find device "nvmf_tgt_br2" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:56.723 Cannot find device "nvmf_br" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:56.723 Cannot find device "nvmf_init_if" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:56.723 Cannot find device "nvmf_init_if2" 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.723 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:56.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:15:56.981 00:15:56.981 --- 10.0.0.3 ping statistics --- 00:15:56.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.981 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:56.981 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:56.981 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:15:56.981 00:15:56.981 --- 10.0.0.4 ping statistics --- 00:15:56.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.981 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:56.981 00:15:56.981 --- 10.0.0.1 ping statistics --- 00:15:56.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.981 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:56.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:15:56.981 00:15:56.981 --- 10.0.0.2 ping statistics --- 00:15:56.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.981 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75613 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75613 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75613 ']' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.981 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 [2024-11-26 19:23:55.452168] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
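Before the target's startup banner continues below, the nvmf/common.sh steps above are worth summarizing: the test builds a small virtual topology with two initiator-side veth pairs left in the root namespace (nvmf_init_if/if2 at 10.0.0.1 and 10.0.0.2), two target-side pairs whose "if" ends are moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if/if2 at 10.0.0.3 and 10.0.0.4), all bridge-side peers enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420, and a four-way ping check, after which the target application is launched inside the namespace. The condensed sketch below mirrors those commands from the log for readability; it is an approximation, not the literal nvmf/common.sh code, and it omits the cleanup pass and the SPDK_NVMF comment tag added to the iptables rules.

```bash
# Condensed sketch of the NET_TYPE=virt topology set up above (names and
# addresses copied from the log; error handling and cleanup omitted).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator side stays in the root namespace, target side is
# moved into the namespace; the *_br peers will be enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring links up and tie the bridge-side peers together via nvmf_br.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic on port 4420 in and keep bridged frames flowing.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Finally the target application is started inside the namespace:
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
```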
00:15:57.239 [2024-11-26 19:23:55.452253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.239 [2024-11-26 19:23:55.604653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.239 [2024-11-26 19:23:55.670424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.239 [2024-11-26 19:23:55.670525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.239 [2024-11-26 19:23:55.670538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.239 [2024-11-26 19:23:55.670548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.239 [2024-11-26 19:23:55.670557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.239 [2024-11-26 19:23:55.671020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.497 [2024-11-26 19:23:55.748494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 [2024-11-26 19:23:55.874436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.497 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.498 [2024-11-26 19:23:55.882709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.498 19:23:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.498 null0 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.498 null1 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.498 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75632 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75632 /tmp/host.sock 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75632 ']' 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.498 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.756 [2024-11-26 19:23:55.975788] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
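At this point there are two SPDK applications in play: the target (nvmfpid 75613) inside nvmf_tgt_ns_spdk, already provisioned over its default RPC socket with a TCP transport, a discovery listener on 10.0.0.3:8009 and two null bdevs (null0, null1), and a second nvmf_tgt (hostpid 75632) started with -m 0x1 -r /tmp/host.sock that plays the host role and drives bdev_nvme discovery. The rpc_cmd helper seen in the log wraps SPDK's scripts/rpc.py client; a rough stand-alone equivalent of the calls made so far, using the paths from this run, would look like the sketch below (an approximation, not the literal host/discovery.sh code).

```bash
# Target-side provisioning; rpc.py talks to the default /var/tmp/spdk.sock,
# which stays reachable from the root namespace because it is a pathname
# UNIX socket. Run from the SPDK repo root (/home/vagrant/spdk_repo/spdk here).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.3 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512   # bdevs exported as namespaces later
scripts/rpc.py bdev_null_create null1 1000 512
scripts/rpc.py bdev_wait_for_examine

# Host side: a second nvmf_tgt with its own RPC socket acts as the initiator.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
```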
00:15:57.756 [2024-11-26 19:23:55.976078] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75632 ] 00:15:57.756 [2024-11-26 19:23:56.129688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.756 [2024-11-26 19:23:56.191975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.015 [2024-11-26 19:23:56.248924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.015 19:23:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:58.015 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.275 19:23:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.275 [2024-11-26 19:23:56.706788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.275 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:15:58.534 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:15:59.101 [2024-11-26 19:23:57.337988] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:59.101 [2024-11-26 19:23:57.338010] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:59.101 [2024-11-26 19:23:57.338032] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:59.101 
[2024-11-26 19:23:57.344077] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:59.101 [2024-11-26 19:23:57.398604] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:59.101 [2024-11-26 19:23:57.399665] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c1ee60:1 started. 00:15:59.101 [2024-11-26 19:23:57.401500] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:59.102 [2024-11-26 19:23:57.401523] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:59.102 [2024-11-26 19:23:57.406181] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c1ee60 was disconnected and freed. delete nvme_qpair. 00:15:59.667 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:59.668 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.668 19:23:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:59.668 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.926 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:59.927 [2024-11-26 19:23:58.190415] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c2d2f0:1 started. 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:59.927 [2024-11-26 19:23:58.197433] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c2d2f0 was disconnected and freed. delete nvme_qpair. 
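Everything from here on follows the same polling pattern: waitforcondition re-evaluates a shell condition up to ten times with a one-second sleep between attempts, and the get_subsystem_names / get_bdev_list helpers normalize RPC output with jq, sort and xargs into a single space-separated string. A minimal sketch of that machinery is shown below, assuming the same helper names as host/discovery.sh and autotest_common.sh; the real implementations may differ in details such as failure handling.

```bash
# Sketch of the helpers driving the checks above; RPC calls go to the
# host-side application on /tmp/host.sock, as in the log.
rpc="scripts/rpc.py -s /tmp/host.sock"

get_subsystem_names() { $rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
get_bdev_list()       { $rpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition satisfied
        sleep 1
    done
    return 1                       # timed out (exact failure handling assumed)
}

# Typical uses, matching the checks in this section:
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
```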
00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.927 [2024-11-26 19:23:58.308288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:59.927 [2024-11-26 19:23:58.309507] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:59.927 [2024-11-26 19:23:58.309535] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:59.927 [2024-11-26 19:23:58.315514] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:59.927 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.187 [2024-11-26 19:23:58.380956] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:00.187 [2024-11-26 19:23:58.381008] 
bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:00.187 [2024-11-26 19:23:58.381032] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:00.187 [2024-11-26 19:23:58.381039] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.187 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.188 [2024-11-26 19:23:58.548751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.188 [2024-11-26 19:23:58.548792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.188 [2024-11-26 19:23:58.548823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.188 [2024-11-26 19:23:58.548831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.188 [2024-11-26 19:23:58.548840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.188 [2024-11-26 19:23:58.548848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.188 [2024-11-26 19:23:58.548857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.188 [2024-11-26 19:23:58.548865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.188 [2024-11-26 19:23:58.548873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfb240 is same with the state(6) to be set 00:16:00.188 [2024-11-26 19:23:58.548994] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:00.188 [2024-11-26 19:23:58.549032] bdev_nvme.c:7447:get_discovery_log_page: 
*INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:00.188 [2024-11-26 19:23:58.554993] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:00.188 [2024-11-26 19:23:58.555019] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:00.188 [2024-11-26 19:23:58.555074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb240 (9): Bad file descriptor 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.188 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.447 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.448 
19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.448 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.708 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.643 [2024-11-26 19:23:59.960079] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:01.643 [2024-11-26 19:23:59.960108] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:01.643 [2024-11-26 19:23:59.960142] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:01.643 [2024-11-26 19:23:59.966121] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:01.643 [2024-11-26 19:24:00.024451] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:01.643 [2024-11-26 19:24:00.025239] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1c07390:1 started. 00:16:01.643 [2024-11-26 19:24:00.027383] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:01.643 [2024-11-26 19:24:00.027440] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:01.643 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.643 [2024-11-26 19:24:00.028866] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1c07390 was disconnected and freed. delete nvme_qpair. 
00:16:01.643 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:01.643 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:01.643 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:01.643 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:01.643 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.643 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.644 request: 00:16:01.644 { 00:16:01.644 "name": "nvme", 00:16:01.644 "trtype": "tcp", 00:16:01.644 "traddr": "10.0.0.3", 00:16:01.644 "adrfam": "ipv4", 00:16:01.644 "trsvcid": "8009", 00:16:01.644 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:01.644 "wait_for_attach": true, 00:16:01.644 "method": "bdev_nvme_start_discovery", 00:16:01.644 "req_id": 1 00:16:01.644 } 00:16:01.644 Got JSON-RPC error response 00:16:01.644 response: 00:16:01.644 { 00:16:01.644 "code": -17, 00:16:01.644 "message": "File exists" 00:16:01.644 } 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:01.644 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.902 request: 00:16:01.902 { 00:16:01.902 "name": "nvme_second", 00:16:01.902 "trtype": "tcp", 00:16:01.902 "traddr": "10.0.0.3", 00:16:01.902 "adrfam": "ipv4", 00:16:01.902 "trsvcid": "8009", 00:16:01.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:01.902 "wait_for_attach": true, 00:16:01.902 "method": "bdev_nvme_start_discovery", 00:16:01.902 "req_id": 1 00:16:01.902 } 00:16:01.902 Got JSON-RPC error response 00:16:01.902 response: 00:16:01.902 { 00:16:01.902 "code": -17, 00:16:01.902 "message": "File exists" 00:16:01.902 } 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.902 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.903 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.278 [2024-11-26 19:24:01.287751] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:03.278 [2024-11-26 19:24:01.287807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf9f50 with addr=10.0.0.3, port=8010 00:16:03.278 [2024-11-26 19:24:01.287834] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:03.278 [2024-11-26 19:24:01.287845] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:03.278 [2024-11-26 19:24:01.287855] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:04.211 [2024-11-26 19:24:02.287734] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:04.211 [2024-11-26 19:24:02.287817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf9f50 with addr=10.0.0.3, port=8010 00:16:04.211 [2024-11-26 19:24:02.287840] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:04.211 [2024-11-26 19:24:02.287859] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:04.211 [2024-11-26 19:24:02.287867] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:05.147 [2024-11-26 19:24:03.287627] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:05.147 request: 00:16:05.147 { 00:16:05.147 "name": "nvme_second", 00:16:05.147 "trtype": "tcp", 00:16:05.147 "traddr": "10.0.0.3", 00:16:05.147 "adrfam": "ipv4", 00:16:05.147 "trsvcid": "8010", 00:16:05.147 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:05.147 "wait_for_attach": false, 00:16:05.147 "attach_timeout_ms": 3000, 00:16:05.147 "method": "bdev_nvme_start_discovery", 00:16:05.147 "req_id": 1 00:16:05.147 } 00:16:05.147 Got JSON-RPC error response 00:16:05.147 response: 00:16:05.147 { 00:16:05.147 "code": -110, 00:16:05.147 "message": "Connection timed out" 00:16:05.147 } 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:05.147 19:24:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75632 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.147 rmmod nvme_tcp 00:16:05.147 rmmod nvme_fabrics 00:16:05.147 rmmod nvme_keyring 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75613 ']' 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75613 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75613 ']' 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75613 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75613 00:16:05.147 killing process with pid 75613 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75613' 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75613 00:16:05.147 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75613 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:05.405 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:05.718 00:16:05.718 real 0m9.204s 00:16:05.718 user 0m17.230s 00:16:05.718 sys 0m2.062s 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.718 ************************************ 00:16:05.718 END TEST nvmf_host_discovery 00:16:05.718 ************************************ 00:16:05.718 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.718 ************************************ 
00:16:05.718 START TEST nvmf_host_multipath_status 00:16:05.718 ************************************ 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:05.718 * Looking for test storage... 00:16:05.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:16:05.718 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.992 --rc genhtml_branch_coverage=1 00:16:05.992 --rc genhtml_function_coverage=1 00:16:05.992 --rc genhtml_legend=1 00:16:05.992 --rc geninfo_all_blocks=1 00:16:05.992 --rc geninfo_unexecuted_blocks=1 00:16:05.992 00:16:05.992 ' 00:16:05.992 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.992 --rc genhtml_branch_coverage=1 00:16:05.992 --rc genhtml_function_coverage=1 00:16:05.992 --rc genhtml_legend=1 00:16:05.992 --rc geninfo_all_blocks=1 00:16:05.992 --rc geninfo_unexecuted_blocks=1 00:16:05.992 00:16:05.992 ' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.993 --rc genhtml_branch_coverage=1 00:16:05.993 --rc genhtml_function_coverage=1 00:16:05.993 --rc genhtml_legend=1 00:16:05.993 --rc geninfo_all_blocks=1 00:16:05.993 --rc geninfo_unexecuted_blocks=1 00:16:05.993 00:16:05.993 ' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.993 --rc genhtml_branch_coverage=1 00:16:05.993 --rc genhtml_function_coverage=1 00:16:05.993 --rc genhtml_legend=1 00:16:05.993 --rc geninfo_all_blocks=1 00:16:05.993 --rc geninfo_unexecuted_blocks=1 00:16:05.993 00:16:05.993 ' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.993 19:24:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.993 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:05.994 Cannot find device "nvmf_init_br" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:05.994 Cannot find device "nvmf_init_br2" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:05.994 Cannot find device "nvmf_tgt_br" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.994 Cannot find device "nvmf_tgt_br2" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:05.994 Cannot find device "nvmf_init_br" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:05.994 Cannot find device "nvmf_init_br2" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:05.994 Cannot find device "nvmf_tgt_br" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:05.994 Cannot find device "nvmf_tgt_br2" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:05.994 Cannot find device "nvmf_br" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:05.994 Cannot find device "nvmf_init_if" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:05.994 Cannot find device "nvmf_init_if2" 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:05.994 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:06.253 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:06.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:06.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:16:06.254 00:16:06.254 --- 10.0.0.3 ping statistics --- 00:16:06.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.254 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:06.254 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:06.254 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:06.254 00:16:06.254 --- 10.0.0.4 ping statistics --- 00:16:06.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.254 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:06.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:06.254 00:16:06.254 --- 10.0.0.1 ping statistics --- 00:16:06.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.254 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:06.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:06.254 00:16:06.254 --- 10.0.0.2 ping statistics --- 00:16:06.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.254 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76140 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76140 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76140 ']' 00:16:06.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
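The nvmf/common.sh trace above assembles the private test network the rest of this run depends on: a dedicated namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) tying both sides together, initiator addresses 10.0.0.1/10.0.0.2 in the root namespace and target addresses 10.0.0.3/10.0.0.4 inside the namespace, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions before nvmf_tgt is launched. The following is a minimal standalone sketch of that topology, reduced to one veth pair per side (the trace also creates the *_if2/*_br2 pair for the 10.0.0.2/10.0.0.4 addresses) and reusing the interface names and addresses shown in the trace; the ipts and nvmfappstart helpers belong to the SPDK test scripts and are not reproduced here.

# Minimal sketch of the veth/bridge/namespace topology built by nvmf/common.sh above.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair (stays in the root namespace)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                          # bridge the two peer ends together
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic back from the target
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # keep bridged traffic flowing if br_netfilter is loaded
ping -c 1 10.0.0.3                                                   # root namespace -> target namespace reachability check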
00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.254 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:06.513 [2024-11-26 19:24:04.716429] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:16:06.513 [2024-11-26 19:24:04.716676] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.513 [2024-11-26 19:24:04.864489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.513 [2024-11-26 19:24:04.909951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.513 [2024-11-26 19:24:04.910238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.513 [2024-11-26 19:24:04.910418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.513 [2024-11-26 19:24:04.910532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.513 [2024-11-26 19:24:04.910573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.513 [2024-11-26 19:24:04.911805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.513 [2024-11-26 19:24:04.911814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.772 [2024-11-26 19:24:04.965483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76140 00:16:06.772 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:07.030 [2024-11-26 19:24:05.368001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.031 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:07.289 Malloc0 00:16:07.289 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:07.548 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.807 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:08.066 [2024-11-26 19:24:06.389575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:08.066 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:08.325 [2024-11-26 19:24:06.621667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76184 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76184 /var/tmp/bdevperf.sock 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76184 ']' 00:16:08.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
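With the target application up inside the namespace, the trace configures it over /var/tmp/spdk.sock: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, the namespace attached, and listeners on 10.0.0.3 ports 4420 and 4421, after which bdevperf is started on its own RPC socket (/var/tmp/bdevperf.sock). Condensed into one script, the target-side RPC sequence looks like the sketch below; the rpc.py path matches the vagrant layout in the log, and the transport options (-o, -u 8192) are copied from the trace rather than re-derived.

# Sketch: target-side configuration, as issued by multipath_status.sh above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192                                             # transport options as used in the trace
$RPC bdev_malloc_create 64 512 -b Malloc0                                                # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting, -m: max namespaces
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # first path
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # second path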
00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.325 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.703 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.703 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:09.703 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:09.703 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:09.962 Nvme0n1 00:16:09.962 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:10.528 Nvme0n1 00:16:10.528 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:10.529 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:12.429 19:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:12.429 19:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:12.688 19:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:12.947 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:13.882 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:13.882 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:13.882 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.882 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:14.141 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.141 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:14.141 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.141 19:24:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.399 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:14.399 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.399 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.400 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.657 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.657 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.657 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.657 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:14.916 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.916 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.916 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.916 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.174 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.174 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:15.174 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.174 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.433 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.433 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:15.433 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:15.690 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:15.949 19:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:16.884 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:16.884 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:16.884 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.884 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:17.142 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.142 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:17.142 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.142 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.402 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.402 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.402 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.402 19:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.661 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.661 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.661 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.661 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.919 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.919 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.919 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.919 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.186 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.186 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:18.186 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.186 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:18.478 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.478 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:18.478 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:18.749 19:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:19.007 19:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:19.940 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:19.940 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:19.940 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.940 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.198 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.198 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:20.198 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.198 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.457 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.457 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.457 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.457 19:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.716 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.716 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:20.716 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:20.716 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.973 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.973 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:20.973 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.973 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:21.231 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.231 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:21.231 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.231 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.490 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.490 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:21.490 19:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:22.057 19:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:22.057 19:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:23.435 19:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:23.435 19:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:23.435 19:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.435 19:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.435 19:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.435 19:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.435 19:24:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.435 19:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.694 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.694 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:23.694 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:23.694 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.953 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.953 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:23.953 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.953 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:24.212 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.212 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:24.212 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:24.212 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.470 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.470 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:24.470 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.470 19:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:24.730 19:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.730 19:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:24.730 19:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:24.988 19:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:25.247 19:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:26.181 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:26.182 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:26.182 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.182 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:26.748 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.748 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:26.748 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.748 19:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:27.007 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.007 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:27.007 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.007 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:27.265 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.265 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:27.265 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.265 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:27.523 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.523 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:27.523 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.523 19:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:16:27.781 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.781 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:27.781 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.781 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:28.040 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:28.040 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:28.040 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:28.298 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:28.555 19:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:29.491 19:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:29.491 19:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:29.491 19:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.491 19:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:29.749 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.749 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:29.749 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:29.749 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.008 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.008 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:30.008 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:30.008 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.266 19:24:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.266 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:30.266 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:30.266 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.525 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.525 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:30.525 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.525 19:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:30.784 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.784 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:30.784 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:30.784 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.043 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.043 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:31.301 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:31.301 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:31.559 19:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:31.818 19:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:32.754 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:32.754 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:32.754 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.754 19:24:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:33.012 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.012 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:33.012 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:33.012 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.271 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.271 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:33.271 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:33.271 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.529 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.529 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:33.529 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.529 19:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.788 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.788 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:33.788 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.788 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:34.046 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.046 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:34.046 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.046 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:34.305 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.305 19:24:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:34.305 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:34.564 19:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:34.823 19:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:35.759 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:35.759 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:35.759 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.759 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:36.017 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:36.017 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:36.017 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.017 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.276 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.276 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.276 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.276 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.857 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.857 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.857 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.857 19:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.857 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.857 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.857 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.857 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:37.155 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.155 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:37.155 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.155 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.414 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.414 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:37.414 19:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:37.673 19:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:37.931 19:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:38.868 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:38.868 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:38.868 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.868 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.436 19:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:39.695 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.695 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:39.695 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.695 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:40.261 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.261 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:40.261 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.261 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.519 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.519 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:40.519 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.519 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.778 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.778 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:40.778 19:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:41.036 19:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:41.294 19:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:42.228 19:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:42.228 19:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:42.228 19:24:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.228 19:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:42.488 19:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.488 19:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:42.488 19:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.488 19:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:42.746 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.746 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:42.746 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.747 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:43.005 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.005 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:43.005 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:43.005 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.262 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.262 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:43.262 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:43.262 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.520 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.520 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:43.520 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:43.520 19:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.778 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:43.778 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76184 00:16:43.778 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76184 ']' 00:16:43.778 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76184 00:16:43.778 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:43.778 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.778 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76184 00:16:44.039 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:44.039 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:44.039 killing process with pid 76184 00:16:44.039 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76184' 00:16:44.039 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76184 00:16:44.039 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76184 00:16:44.039 { 00:16:44.039 "results": [ 00:16:44.039 { 00:16:44.039 "job": "Nvme0n1", 00:16:44.039 "core_mask": "0x4", 00:16:44.039 "workload": "verify", 00:16:44.039 "status": "terminated", 00:16:44.039 "verify_range": { 00:16:44.039 "start": 0, 00:16:44.039 "length": 16384 00:16:44.039 }, 00:16:44.039 "queue_depth": 128, 00:16:44.039 "io_size": 4096, 00:16:44.039 "runtime": 33.415886, 00:16:44.039 "iops": 8121.646093717221, 00:16:44.039 "mibps": 31.725180053582896, 00:16:44.039 "io_failed": 0, 00:16:44.039 "io_timeout": 0, 00:16:44.039 "avg_latency_us": 15730.750138812964, 00:16:44.039 "min_latency_us": 781.9636363636364, 00:16:44.039 "max_latency_us": 4026531.84 00:16:44.039 } 00:16:44.039 ], 00:16:44.039 "core_count": 1 00:16:44.039 } 00:16:44.039 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76184 00:16:44.039 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:44.039 [2024-11-26 19:24:06.697072] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:16:44.039 [2024-11-26 19:24:06.697170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76184 ] 00:16:44.039 [2024-11-26 19:24:06.850676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.039 [2024-11-26 19:24:06.904850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.039 [2024-11-26 19:24:06.964029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:44.039 Running I/O for 90 seconds... 
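The xtrace above boils down to one pattern: query the initiator's view of each path with bdev_nvme_get_io_paths over the bdevperf RPC socket, filter one field per listener port with jq, and re-check after flipping the target's ANA state. A minimal sketch of that pattern, assuming the socket path, NQN, and addresses shown in the trace (the helper name and argument order are paraphrased from the xtrace output, not copied from multipath_status.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # port_status <trsvcid> <field> <expected>: read one field of the io_path for that port
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }
  # Flip the 4421 listener to inaccessible on the target, give the host a moment to notice,
  # then confirm that path is still connected but no longer accessible (as in the trace above):
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
  sleep 1
  port_status 4421 connected true
  port_status 4421 accessible false
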
00:16:44.039 7573.00 IOPS, 29.58 MiB/s [2024-11-26T19:24:42.479Z] 7562.50 IOPS, 29.54 MiB/s [2024-11-26T19:24:42.479Z] 7559.00 IOPS, 29.53 MiB/s [2024-11-26T19:24:42.479Z] 7525.25 IOPS, 29.40 MiB/s [2024-11-26T19:24:42.479Z] 7479.40 IOPS, 29.22 MiB/s [2024-11-26T19:24:42.479Z] 7798.33 IOPS, 30.46 MiB/s [2024-11-26T19:24:42.479Z] 8190.43 IOPS, 31.99 MiB/s [2024-11-26T19:24:42.479Z] 8416.62 IOPS, 32.88 MiB/s [2024-11-26T19:24:42.479Z] 8614.44 IOPS, 33.65 MiB/s [2024-11-26T19:24:42.479Z] 8778.60 IOPS, 34.29 MiB/s [2024-11-26T19:24:42.479Z] 8899.82 IOPS, 34.76 MiB/s [2024-11-26T19:24:42.479Z] 8876.17 IOPS, 34.67 MiB/s [2024-11-26T19:24:42.479Z] 8910.31 IOPS, 34.81 MiB/s [2024-11-26T19:24:42.479Z] 8911.57 IOPS, 34.81 MiB/s [2024-11-26T19:24:42.479Z] [2024-11-26 19:24:23.309352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.039 [2024-11-26 19:24:23.309413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:44.039 [2024-11-26 19:24:23.309494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.309514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.309549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.309586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.309618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.309649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.309680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.309712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.309743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.309803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.309834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.309865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.309896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.309954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.309974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.309987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 
19:24:23.310115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.040 [2024-11-26 19:24:23.310383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116456 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:44.040 [2024-11-26 19:24:23.310890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.040 [2024-11-26 19:24:23.310905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.310927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.310966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.310990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.041 [2024-11-26 19:24:23.311871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.311945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.311971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.312040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.312076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.312147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.312183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.312219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.041 [2024-11-26 19:24:23.312253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:44.041 [2024-11-26 19:24:23.312290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.312304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 
19:24:23.312339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.312373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.312421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.312455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.312496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.312530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116144 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.312945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.312996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.313012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.313047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.313081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.042 [2024-11-26 19:24:23.313116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:44.042 [2024-11-26 19:24:23.313414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.042 [2024-11-26 19:24:23.313428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:16:44.042 [2024-11-26 19:24:23.313448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.313695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.313728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.313761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.313802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.313836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.313875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.313908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.313957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.313978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.043 [2024-11-26 19:24:23.314301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:44.043 [2024-11-26 19:24:23.314597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.043 [2024-11-26 19:24:23.314610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:44.043 8611.33 IOPS, 33.64 MiB/s [2024-11-26T19:24:42.483Z] 8073.12 IOPS, 31.54 MiB/s [2024-11-26T19:24:42.483Z] 7598.24 IOPS, 29.68 MiB/s [2024-11-26T19:24:42.483Z] 7176.11 IOPS, 28.03 MiB/s [2024-11-26T19:24:42.483Z] 7040.74 IOPS, 27.50 MiB/s [2024-11-26T19:24:42.483Z] 7155.90 IOPS, 27.95 MiB/s [2024-11-26T19:24:42.483Z] 7247.52 IOPS, 28.31 MiB/s [2024-11-26T19:24:42.484Z] 7368.82 IOPS, 28.78 MiB/s [2024-11-26T19:24:42.484Z] 7480.17 IOPS, 29.22 MiB/s [2024-11-26T19:24:42.484Z] 7568.96 IOPS, 29.57 MiB/s [2024-11-26T19:24:42.484Z] 7645.56 IOPS, 29.87 MiB/s [2024-11-26T19:24:42.484Z] 7711.81 IOPS, 30.12 MiB/s [2024-11-26T19:24:42.484Z] 7780.26 IOPS, 30.39 MiB/s [2024-11-26T19:24:42.484Z] 7856.04 IOPS, 30.69 MiB/s [2024-11-26T19:24:42.484Z] 7928.38 IOPS, 30.97 MiB/s [2024-11-26T19:24:42.484Z] 7993.30 IOPS, 31.22 MiB/s [2024-11-26T19:24:42.484Z] [2024-11-26 19:24:39.500941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.501306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.501337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.501368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.501400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.501431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.501554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.501567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.503256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.503296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.503328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.503360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.503391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.503422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.503453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.503484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.503515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.503546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.044 [2024-11-26 19:24:39.503630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.044 [2024-11-26 19:24:39.503671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:44.044 [2024-11-26 19:24:39.503693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.045 [2024-11-26 19:24:39.503708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:44.045 [2024-11-26 19:24:39.503729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.045 [2024-11-26 19:24:39.503744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:44.045 [2024-11-26 19:24:39.503766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.045 [2024-11-26 19:24:39.503780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:44.045 [2024-11-26 19:24:39.503850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.045 [2024-11-26 19:24:39.503886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:44.045 [2024-11-26 19:24:39.503935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.045 [2024-11-26 19:24:39.503974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:44.045 8042.84 IOPS, 31.42 MiB/s [2024-11-26T19:24:42.485Z] 8082.75 IOPS, 31.57 MiB/s [2024-11-26T19:24:42.485Z] 8110.55 IOPS, 31.68 MiB/s [2024-11-26T19:24:42.485Z] Received shutdown signal, test time was about 33.416713 seconds 00:16:44.045 00:16:44.045 Latency(us) 00:16:44.045 [2024-11-26T19:24:42.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.045 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:44.045 Verification LBA range: start 0x0 length 0x4000 00:16:44.045 Nvme0n1 : 33.42 8121.65 31.73 0.00 0.00 15730.75 781.96 4026531.84 00:16:44.045 [2024-11-26T19:24:42.485Z] =================================================================================================================== 00:16:44.045 [2024-11-26T19:24:42.485Z] Total : 8121.65 31.73 0.00 0.00 15730.75 781.96 4026531.84 00:16:44.045 19:24:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.303 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.303 rmmod nvme_tcp 00:16:44.303 rmmod nvme_fabrics 00:16:44.303 rmmod nvme_keyring 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76140 ']' 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76140 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76140 ']' 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76140 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76140 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.562 killing process with pid 76140 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76140' 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76140 00:16:44.562 19:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76140 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.821 19:24:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.821 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.079 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:45.079 ************************************ 00:16:45.079 END TEST nvmf_host_multipath_status 00:16:45.079 ************************************ 00:16:45.079 00:16:45.079 real 0m39.244s 00:16:45.079 user 2m6.472s 00:16:45.079 sys 0m11.685s 00:16:45.079 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.079 19:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:45.079 19:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test 
nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:45.079 19:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:45.079 19:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.079 19:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.079 ************************************ 00:16:45.079 START TEST nvmf_discovery_remove_ifc 00:16:45.080 ************************************ 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:45.080 * Looking for test storage... 00:16:45.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.080 --rc genhtml_branch_coverage=1 00:16:45.080 --rc genhtml_function_coverage=1 00:16:45.080 --rc genhtml_legend=1 00:16:45.080 --rc geninfo_all_blocks=1 00:16:45.080 --rc geninfo_unexecuted_blocks=1 00:16:45.080 00:16:45.080 ' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.080 --rc genhtml_branch_coverage=1 00:16:45.080 --rc genhtml_function_coverage=1 00:16:45.080 --rc genhtml_legend=1 00:16:45.080 --rc geninfo_all_blocks=1 00:16:45.080 --rc geninfo_unexecuted_blocks=1 00:16:45.080 00:16:45.080 ' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.080 --rc genhtml_branch_coverage=1 00:16:45.080 --rc genhtml_function_coverage=1 00:16:45.080 --rc genhtml_legend=1 00:16:45.080 --rc geninfo_all_blocks=1 00:16:45.080 --rc geninfo_unexecuted_blocks=1 00:16:45.080 00:16:45.080 ' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.080 --rc genhtml_branch_coverage=1 00:16:45.080 --rc genhtml_function_coverage=1 00:16:45.080 --rc genhtml_legend=1 00:16:45.080 --rc geninfo_all_blocks=1 00:16:45.080 --rc geninfo_unexecuted_blocks=1 00:16:45.080 00:16:45.080 ' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.080 19:24:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.080 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.340 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.341 19:24:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:45.341 Cannot find device "nvmf_init_br" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:45.341 Cannot find device "nvmf_init_br2" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:45.341 Cannot find device "nvmf_tgt_br" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.341 Cannot find device "nvmf_tgt_br2" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:45.341 Cannot find device "nvmf_init_br" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:45.341 Cannot find device "nvmf_init_br2" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:45.341 Cannot find device "nvmf_tgt_br" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:45.341 Cannot find device "nvmf_tgt_br2" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:45.341 Cannot find device "nvmf_br" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:45.341 Cannot find device "nvmf_init_if" 00:16:45.341 19:24:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:45.341 Cannot find device "nvmf_init_if2" 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:45.341 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.600 19:24:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:45.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:45.600 00:16:45.600 --- 10.0.0.3 ping statistics --- 00:16:45.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.600 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:45.600 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:45.600 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:16:45.600 00:16:45.600 --- 10.0.0.4 ping statistics --- 00:16:45.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.600 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:45.600 00:16:45.600 --- 10.0.0.1 ping statistics --- 00:16:45.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.600 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:45.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:45.600 00:16:45.600 --- 10.0.0.2 ping statistics --- 00:16:45.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.600 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:45.600 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77026 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77026 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77026 ']' 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.601 19:24:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.601 [2024-11-26 19:24:43.992130] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:16:45.601 [2024-11-26 19:24:43.992205] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.859 [2024-11-26 19:24:44.140506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.859 [2024-11-26 19:24:44.202007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.859 [2024-11-26 19:24:44.202093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.859 [2024-11-26 19:24:44.202106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.859 [2024-11-26 19:24:44.202115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.859 [2024-11-26 19:24:44.202122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.859 [2024-11-26 19:24:44.202570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.859 [2024-11-26 19:24:44.277646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 [2024-11-26 19:24:44.409698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.119 [2024-11-26 19:24:44.417950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:46.119 null0 00:16:46.119 [2024-11-26 19:24:44.449767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77052 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77052 /tmp/host.sock 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77052 ']' 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.119 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.119 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.119 [2024-11-26 19:24:44.531150] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:16:46.119 [2024-11-26 19:24:44.531243] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77052 ] 00:16:46.378 [2024-11-26 19:24:44.689083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.378 [2024-11-26 19:24:44.741566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.378 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.637 [2024-11-26 19:24:44.863385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.637 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.637 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:46.637 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.637 19:24:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.574 [2024-11-26 19:24:45.923022] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:47.575 [2024-11-26 19:24:45.923046] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:47.575 [2024-11-26 19:24:45.923070] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:47.575 [2024-11-26 19:24:45.929069] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:47.575 [2024-11-26 19:24:45.983396] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:47.575 [2024-11-26 19:24:45.984398] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x106f000:1 started. 00:16:47.575 [2024-11-26 19:24:45.986137] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:47.575 [2024-11-26 19:24:45.986207] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:47.575 [2024-11-26 19:24:45.986233] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:47.575 [2024-11-26 19:24:45.986248] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:47.575 [2024-11-26 19:24:45.986268] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.575 [2024-11-26 19:24:45.991549] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x106f000 was disconnected and freed. delete nvme_qpair. 
00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.575 19:24:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:47.835 19:24:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:48.771 19:24:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:48.771 19:24:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:50.147 19:24:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.081 19:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.015 19:24:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:52.015 19:24:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.949 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.949 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.949 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.949 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.949 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.949 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.949 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.208 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.208 [2024-11-26 19:24:51.414202] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:53.208 [2024-11-26 19:24:51.414275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.208 [2024-11-26 19:24:51.414290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.208 [2024-11-26 19:24:51.414302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.208 [2024-11-26 19:24:51.414311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.208 [2024-11-26 19:24:51.414320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.208 [2024-11-26 19:24:51.414329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.208 [2024-11-26 19:24:51.414338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.208 [2024-11-26 19:24:51.414346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.208 [2024-11-26 19:24:51.414356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.208 [2024-11-26 19:24:51.414364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.208 [2024-11-26 19:24:51.414372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b250 is same with the state(6) to be set 00:16:53.208 [2024-11-26 19:24:51.424208] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b250 (9): Bad file descriptor 00:16:53.208 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.208 19:24:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.208 [2024-11-26 19:24:51.434229] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:53.208 [2024-11-26 19:24:51.434255] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:53.208 [2024-11-26 19:24:51.434277] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:53.208 [2024-11-26 19:24:51.434298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:53.208 [2024-11-26 19:24:51.434362] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.142 [2024-11-26 19:24:52.481972] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:54.142 [2024-11-26 19:24:52.482060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x104b250 with addr=10.0.0.3, port=4420 00:16:54.142 [2024-11-26 19:24:52.482080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b250 is same with the state(6) to be set 00:16:54.142 [2024-11-26 19:24:52.482136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b250 (9): Bad file descriptor 00:16:54.142 [2024-11-26 19:24:52.482562] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:54.142 [2024-11-26 19:24:52.482597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:54.142 [2024-11-26 19:24:52.482608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:54.142 [2024-11-26 19:24:52.482619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:54.142 [2024-11-26 19:24:52.482658] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:54.142 [2024-11-26 19:24:52.482665] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:16:54.142 [2024-11-26 19:24:52.482670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:54.142 [2024-11-26 19:24:52.482680] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:54.142 [2024-11-26 19:24:52.482686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:54.142 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.143 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.143 19:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.077 [2024-11-26 19:24:53.482733] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:55.077 [2024-11-26 19:24:53.482795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:55.077 [2024-11-26 19:24:53.482821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:55.077 [2024-11-26 19:24:53.482846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:55.077 [2024-11-26 19:24:53.482856] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:55.077 [2024-11-26 19:24:53.482865] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:55.077 [2024-11-26 19:24:53.482872] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:55.077 [2024-11-26 19:24:53.482877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:16:55.077 [2024-11-26 19:24:53.482920] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:55.077 [2024-11-26 19:24:53.482963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.077 [2024-11-26 19:24:53.482984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.077 [2024-11-26 19:24:53.482998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.077 [2024-11-26 19:24:53.483007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.077 [2024-11-26 19:24:53.483016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.077 [2024-11-26 19:24:53.483024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.077 [2024-11-26 19:24:53.483035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.077 [2024-11-26 19:24:53.483059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.078 [2024-11-26 19:24:53.483084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.078 [2024-11-26 19:24:53.483110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.078 [2024-11-26 19:24:53.483125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:16:55.078 [2024-11-26 19:24:53.483168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd6a20 (9): Bad file descriptor 00:16:55.078 [2024-11-26 19:24:53.484163] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:55.078 [2024-11-26 19:24:53.484205] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:55.078 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:55.336 19:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.269 19:24:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:56.269 19:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.204 [2024-11-26 19:24:55.488301] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:57.204 [2024-11-26 19:24:55.488356] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:57.204 [2024-11-26 19:24:55.488391] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:57.204 [2024-11-26 19:24:55.494341] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:57.204 [2024-11-26 19:24:55.548691] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:57.204 [2024-11-26 19:24:55.549518] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1056d80:1 started. 00:16:57.204 [2024-11-26 19:24:55.550853] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:57.204 [2024-11-26 19:24:55.550911] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:57.204 [2024-11-26 19:24:55.550935] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:57.204 [2024-11-26 19:24:55.550950] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:57.204 [2024-11-26 19:24:55.550968] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:57.204 [2024-11-26 19:24:55.556398] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1056d80 was disconnected and freed. delete nvme_qpair. 
00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:57.462 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77052 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77052 ']' 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77052 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77052 00:16:57.463 killing process with pid 77052 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77052' 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77052 00:16:57.463 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77052 00:16:57.721 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:57.721 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.721 19:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.721 rmmod nvme_tcp 00:16:57.721 rmmod nvme_fabrics 00:16:57.721 rmmod nvme_keyring 00:16:57.721 19:24:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77026 ']' 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77026 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77026 ']' 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77026 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77026 00:16:57.721 killing process with pid 77026 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77026' 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77026 00:16:57.721 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77026 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:57.980 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:58.238 00:16:58.238 real 0m13.271s 00:16:58.238 user 0m22.288s 00:16:58.238 sys 0m2.610s 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.238 ************************************ 00:16:58.238 END TEST nvmf_discovery_remove_ifc 00:16:58.238 ************************************ 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.238 ************************************ 00:16:58.238 START TEST nvmf_identify_kernel_target 00:16:58.238 ************************************ 00:16:58.238 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:58.497 * Looking for test storage... 
00:16:58.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.497 --rc genhtml_branch_coverage=1 00:16:58.497 --rc genhtml_function_coverage=1 00:16:58.497 --rc genhtml_legend=1 00:16:58.497 --rc geninfo_all_blocks=1 00:16:58.497 --rc geninfo_unexecuted_blocks=1 00:16:58.497 00:16:58.497 ' 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.497 --rc genhtml_branch_coverage=1 00:16:58.497 --rc genhtml_function_coverage=1 00:16:58.497 --rc genhtml_legend=1 00:16:58.497 --rc geninfo_all_blocks=1 00:16:58.497 --rc geninfo_unexecuted_blocks=1 00:16:58.497 00:16:58.497 ' 00:16:58.497 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:58.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.497 --rc genhtml_branch_coverage=1 00:16:58.497 --rc genhtml_function_coverage=1 00:16:58.497 --rc genhtml_legend=1 00:16:58.497 --rc geninfo_all_blocks=1 00:16:58.497 --rc geninfo_unexecuted_blocks=1 00:16:58.497 00:16:58.498 ' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:58.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.498 --rc genhtml_branch_coverage=1 00:16:58.498 --rc genhtml_function_coverage=1 00:16:58.498 --rc genhtml_legend=1 00:16:58.498 --rc geninfo_all_blocks=1 00:16:58.498 --rc geninfo_unexecuted_blocks=1 00:16:58.498 00:16:58.498 ' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.498 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:58.498 19:24:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:58.498 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:58.499 19:24:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:58.499 Cannot find device "nvmf_init_br" 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:58.499 Cannot find device "nvmf_init_br2" 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:58.499 Cannot find device "nvmf_tgt_br" 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.499 Cannot find device "nvmf_tgt_br2" 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:58.499 Cannot find device "nvmf_init_br" 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:58.499 Cannot find device "nvmf_init_br2" 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:58.499 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:58.757 Cannot find device "nvmf_tgt_br" 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:58.757 Cannot find device "nvmf_tgt_br2" 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:58.757 Cannot find device "nvmf_br" 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:58.757 Cannot find device "nvmf_init_if" 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:58.757 Cannot find device "nvmf_init_if2" 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.757 19:24:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:58.757 19:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:58.757 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:58.758 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:58.758 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:58.758 19:24:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:58.758 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:58.758 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:58.758 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:58.758 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:58.758 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:59.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:59.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:59.016 00:16:59.016 --- 10.0.0.3 ping statistics --- 00:16:59.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.016 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:59.016 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:59.016 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:16:59.016 00:16:59.016 --- 10.0.0.4 ping statistics --- 00:16:59.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.016 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:59.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:59.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:59.016 00:16:59.016 --- 10.0.0.1 ping statistics --- 00:16:59.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.016 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:59.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:59.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:16:59.016 00:16:59.016 --- 10.0.0.2 ping statistics --- 00:16:59.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.016 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:59.016 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:59.017 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:59.017 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:59.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:59.275 Waiting for block devices as requested 00:16:59.275 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:59.534 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:59.534 No valid GPT data, bailing 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:59.534 19:24:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:59.534 No valid GPT data, bailing 00:16:59.534 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:59.793 19:24:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:59.793 No valid GPT data, bailing 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:59.793 No valid GPT data, bailing 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -a 10.0.0.1 -t tcp -s 4420 00:16:59.793 00:16:59.793 Discovery Log Number of Records 2, Generation counter 2 00:16:59.793 =====Discovery Log Entry 0====== 00:16:59.793 trtype: tcp 00:16:59.793 adrfam: ipv4 00:16:59.793 subtype: current discovery subsystem 00:16:59.793 treq: not specified, sq flow control disable supported 00:16:59.793 portid: 1 00:16:59.793 trsvcid: 4420 00:16:59.793 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:59.793 traddr: 10.0.0.1 00:16:59.793 eflags: none 00:16:59.793 sectype: none 00:16:59.793 =====Discovery Log Entry 1====== 00:16:59.793 trtype: tcp 00:16:59.793 adrfam: ipv4 00:16:59.793 subtype: nvme subsystem 00:16:59.793 treq: not 
specified, sq flow control disable supported 00:16:59.793 portid: 1 00:16:59.793 trsvcid: 4420 00:16:59.793 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:59.793 traddr: 10.0.0.1 00:16:59.793 eflags: none 00:16:59.793 sectype: none 00:16:59.793 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:59.793 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:00.061 ===================================================== 00:17:00.061 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:00.061 ===================================================== 00:17:00.061 Controller Capabilities/Features 00:17:00.061 ================================ 00:17:00.061 Vendor ID: 0000 00:17:00.061 Subsystem Vendor ID: 0000 00:17:00.061 Serial Number: 95f93032054da7c96b89 00:17:00.061 Model Number: Linux 00:17:00.061 Firmware Version: 6.8.9-20 00:17:00.061 Recommended Arb Burst: 0 00:17:00.061 IEEE OUI Identifier: 00 00 00 00:17:00.061 Multi-path I/O 00:17:00.061 May have multiple subsystem ports: No 00:17:00.061 May have multiple controllers: No 00:17:00.061 Associated with SR-IOV VF: No 00:17:00.061 Max Data Transfer Size: Unlimited 00:17:00.061 Max Number of Namespaces: 0 00:17:00.061 Max Number of I/O Queues: 1024 00:17:00.061 NVMe Specification Version (VS): 1.3 00:17:00.061 NVMe Specification Version (Identify): 1.3 00:17:00.062 Maximum Queue Entries: 1024 00:17:00.062 Contiguous Queues Required: No 00:17:00.062 Arbitration Mechanisms Supported 00:17:00.062 Weighted Round Robin: Not Supported 00:17:00.062 Vendor Specific: Not Supported 00:17:00.062 Reset Timeout: 7500 ms 00:17:00.062 Doorbell Stride: 4 bytes 00:17:00.062 NVM Subsystem Reset: Not Supported 00:17:00.062 Command Sets Supported 00:17:00.062 NVM Command Set: Supported 00:17:00.062 Boot Partition: Not Supported 00:17:00.062 Memory Page Size Minimum: 4096 bytes 00:17:00.062 Memory Page Size Maximum: 4096 bytes 00:17:00.062 Persistent Memory Region: Not Supported 00:17:00.062 Optional Asynchronous Events Supported 00:17:00.062 Namespace Attribute Notices: Not Supported 00:17:00.062 Firmware Activation Notices: Not Supported 00:17:00.062 ANA Change Notices: Not Supported 00:17:00.062 PLE Aggregate Log Change Notices: Not Supported 00:17:00.062 LBA Status Info Alert Notices: Not Supported 00:17:00.062 EGE Aggregate Log Change Notices: Not Supported 00:17:00.062 Normal NVM Subsystem Shutdown event: Not Supported 00:17:00.062 Zone Descriptor Change Notices: Not Supported 00:17:00.062 Discovery Log Change Notices: Supported 00:17:00.062 Controller Attributes 00:17:00.062 128-bit Host Identifier: Not Supported 00:17:00.062 Non-Operational Permissive Mode: Not Supported 00:17:00.062 NVM Sets: Not Supported 00:17:00.062 Read Recovery Levels: Not Supported 00:17:00.062 Endurance Groups: Not Supported 00:17:00.062 Predictable Latency Mode: Not Supported 00:17:00.062 Traffic Based Keep ALive: Not Supported 00:17:00.062 Namespace Granularity: Not Supported 00:17:00.062 SQ Associations: Not Supported 00:17:00.062 UUID List: Not Supported 00:17:00.062 Multi-Domain Subsystem: Not Supported 00:17:00.062 Fixed Capacity Management: Not Supported 00:17:00.062 Variable Capacity Management: Not Supported 00:17:00.062 Delete Endurance Group: Not Supported 00:17:00.062 Delete NVM Set: Not Supported 00:17:00.062 Extended LBA Formats Supported: Not Supported 00:17:00.062 Flexible Data 
Placement Supported: Not Supported 00:17:00.062 00:17:00.062 Controller Memory Buffer Support 00:17:00.062 ================================ 00:17:00.062 Supported: No 00:17:00.062 00:17:00.062 Persistent Memory Region Support 00:17:00.062 ================================ 00:17:00.062 Supported: No 00:17:00.062 00:17:00.062 Admin Command Set Attributes 00:17:00.062 ============================ 00:17:00.062 Security Send/Receive: Not Supported 00:17:00.062 Format NVM: Not Supported 00:17:00.062 Firmware Activate/Download: Not Supported 00:17:00.062 Namespace Management: Not Supported 00:17:00.062 Device Self-Test: Not Supported 00:17:00.062 Directives: Not Supported 00:17:00.062 NVMe-MI: Not Supported 00:17:00.062 Virtualization Management: Not Supported 00:17:00.062 Doorbell Buffer Config: Not Supported 00:17:00.062 Get LBA Status Capability: Not Supported 00:17:00.062 Command & Feature Lockdown Capability: Not Supported 00:17:00.062 Abort Command Limit: 1 00:17:00.062 Async Event Request Limit: 1 00:17:00.062 Number of Firmware Slots: N/A 00:17:00.062 Firmware Slot 1 Read-Only: N/A 00:17:00.062 Firmware Activation Without Reset: N/A 00:17:00.062 Multiple Update Detection Support: N/A 00:17:00.062 Firmware Update Granularity: No Information Provided 00:17:00.062 Per-Namespace SMART Log: No 00:17:00.062 Asymmetric Namespace Access Log Page: Not Supported 00:17:00.062 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:00.062 Command Effects Log Page: Not Supported 00:17:00.062 Get Log Page Extended Data: Supported 00:17:00.062 Telemetry Log Pages: Not Supported 00:17:00.062 Persistent Event Log Pages: Not Supported 00:17:00.062 Supported Log Pages Log Page: May Support 00:17:00.062 Commands Supported & Effects Log Page: Not Supported 00:17:00.062 Feature Identifiers & Effects Log Page:May Support 00:17:00.062 NVMe-MI Commands & Effects Log Page: May Support 00:17:00.062 Data Area 4 for Telemetry Log: Not Supported 00:17:00.062 Error Log Page Entries Supported: 1 00:17:00.062 Keep Alive: Not Supported 00:17:00.062 00:17:00.062 NVM Command Set Attributes 00:17:00.062 ========================== 00:17:00.062 Submission Queue Entry Size 00:17:00.062 Max: 1 00:17:00.062 Min: 1 00:17:00.062 Completion Queue Entry Size 00:17:00.062 Max: 1 00:17:00.062 Min: 1 00:17:00.062 Number of Namespaces: 0 00:17:00.062 Compare Command: Not Supported 00:17:00.062 Write Uncorrectable Command: Not Supported 00:17:00.062 Dataset Management Command: Not Supported 00:17:00.062 Write Zeroes Command: Not Supported 00:17:00.062 Set Features Save Field: Not Supported 00:17:00.062 Reservations: Not Supported 00:17:00.062 Timestamp: Not Supported 00:17:00.062 Copy: Not Supported 00:17:00.062 Volatile Write Cache: Not Present 00:17:00.062 Atomic Write Unit (Normal): 1 00:17:00.062 Atomic Write Unit (PFail): 1 00:17:00.062 Atomic Compare & Write Unit: 1 00:17:00.062 Fused Compare & Write: Not Supported 00:17:00.062 Scatter-Gather List 00:17:00.062 SGL Command Set: Supported 00:17:00.062 SGL Keyed: Not Supported 00:17:00.062 SGL Bit Bucket Descriptor: Not Supported 00:17:00.062 SGL Metadata Pointer: Not Supported 00:17:00.062 Oversized SGL: Not Supported 00:17:00.062 SGL Metadata Address: Not Supported 00:17:00.062 SGL Offset: Supported 00:17:00.062 Transport SGL Data Block: Not Supported 00:17:00.062 Replay Protected Memory Block: Not Supported 00:17:00.062 00:17:00.062 Firmware Slot Information 00:17:00.062 ========================= 00:17:00.062 Active slot: 0 00:17:00.062 00:17:00.062 00:17:00.062 Error Log 
00:17:00.062 ========= 00:17:00.062 00:17:00.062 Active Namespaces 00:17:00.062 ================= 00:17:00.062 Discovery Log Page 00:17:00.062 ================== 00:17:00.062 Generation Counter: 2 00:17:00.062 Number of Records: 2 00:17:00.062 Record Format: 0 00:17:00.062 00:17:00.062 Discovery Log Entry 0 00:17:00.062 ---------------------- 00:17:00.062 Transport Type: 3 (TCP) 00:17:00.062 Address Family: 1 (IPv4) 00:17:00.062 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:00.062 Entry Flags: 00:17:00.062 Duplicate Returned Information: 0 00:17:00.062 Explicit Persistent Connection Support for Discovery: 0 00:17:00.062 Transport Requirements: 00:17:00.062 Secure Channel: Not Specified 00:17:00.062 Port ID: 1 (0x0001) 00:17:00.062 Controller ID: 65535 (0xffff) 00:17:00.062 Admin Max SQ Size: 32 00:17:00.062 Transport Service Identifier: 4420 00:17:00.062 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:00.062 Transport Address: 10.0.0.1 00:17:00.062 Discovery Log Entry 1 00:17:00.063 ---------------------- 00:17:00.063 Transport Type: 3 (TCP) 00:17:00.063 Address Family: 1 (IPv4) 00:17:00.063 Subsystem Type: 2 (NVM Subsystem) 00:17:00.063 Entry Flags: 00:17:00.063 Duplicate Returned Information: 0 00:17:00.063 Explicit Persistent Connection Support for Discovery: 0 00:17:00.063 Transport Requirements: 00:17:00.063 Secure Channel: Not Specified 00:17:00.063 Port ID: 1 (0x0001) 00:17:00.063 Controller ID: 65535 (0xffff) 00:17:00.063 Admin Max SQ Size: 32 00:17:00.063 Transport Service Identifier: 4420 00:17:00.063 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:00.063 Transport Address: 10.0.0.1 00:17:00.063 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:00.347 get_feature(0x01) failed 00:17:00.347 get_feature(0x02) failed 00:17:00.347 get_feature(0x04) failed 00:17:00.347 ===================================================== 00:17:00.347 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:00.347 ===================================================== 00:17:00.347 Controller Capabilities/Features 00:17:00.347 ================================ 00:17:00.347 Vendor ID: 0000 00:17:00.347 Subsystem Vendor ID: 0000 00:17:00.347 Serial Number: 0dd17fb88ee19895a259 00:17:00.347 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:00.347 Firmware Version: 6.8.9-20 00:17:00.347 Recommended Arb Burst: 6 00:17:00.347 IEEE OUI Identifier: 00 00 00 00:17:00.347 Multi-path I/O 00:17:00.347 May have multiple subsystem ports: Yes 00:17:00.347 May have multiple controllers: Yes 00:17:00.347 Associated with SR-IOV VF: No 00:17:00.347 Max Data Transfer Size: Unlimited 00:17:00.347 Max Number of Namespaces: 1024 00:17:00.347 Max Number of I/O Queues: 128 00:17:00.347 NVMe Specification Version (VS): 1.3 00:17:00.347 NVMe Specification Version (Identify): 1.3 00:17:00.347 Maximum Queue Entries: 1024 00:17:00.347 Contiguous Queues Required: No 00:17:00.347 Arbitration Mechanisms Supported 00:17:00.347 Weighted Round Robin: Not Supported 00:17:00.347 Vendor Specific: Not Supported 00:17:00.347 Reset Timeout: 7500 ms 00:17:00.347 Doorbell Stride: 4 bytes 00:17:00.347 NVM Subsystem Reset: Not Supported 00:17:00.347 Command Sets Supported 00:17:00.347 NVM Command Set: Supported 00:17:00.347 Boot Partition: Not Supported 00:17:00.347 Memory 
Page Size Minimum: 4096 bytes 00:17:00.347 Memory Page Size Maximum: 4096 bytes 00:17:00.347 Persistent Memory Region: Not Supported 00:17:00.347 Optional Asynchronous Events Supported 00:17:00.347 Namespace Attribute Notices: Supported 00:17:00.347 Firmware Activation Notices: Not Supported 00:17:00.347 ANA Change Notices: Supported 00:17:00.347 PLE Aggregate Log Change Notices: Not Supported 00:17:00.347 LBA Status Info Alert Notices: Not Supported 00:17:00.347 EGE Aggregate Log Change Notices: Not Supported 00:17:00.347 Normal NVM Subsystem Shutdown event: Not Supported 00:17:00.347 Zone Descriptor Change Notices: Not Supported 00:17:00.347 Discovery Log Change Notices: Not Supported 00:17:00.347 Controller Attributes 00:17:00.347 128-bit Host Identifier: Supported 00:17:00.347 Non-Operational Permissive Mode: Not Supported 00:17:00.347 NVM Sets: Not Supported 00:17:00.347 Read Recovery Levels: Not Supported 00:17:00.347 Endurance Groups: Not Supported 00:17:00.347 Predictable Latency Mode: Not Supported 00:17:00.347 Traffic Based Keep ALive: Supported 00:17:00.347 Namespace Granularity: Not Supported 00:17:00.347 SQ Associations: Not Supported 00:17:00.347 UUID List: Not Supported 00:17:00.347 Multi-Domain Subsystem: Not Supported 00:17:00.347 Fixed Capacity Management: Not Supported 00:17:00.347 Variable Capacity Management: Not Supported 00:17:00.347 Delete Endurance Group: Not Supported 00:17:00.347 Delete NVM Set: Not Supported 00:17:00.347 Extended LBA Formats Supported: Not Supported 00:17:00.347 Flexible Data Placement Supported: Not Supported 00:17:00.347 00:17:00.347 Controller Memory Buffer Support 00:17:00.347 ================================ 00:17:00.347 Supported: No 00:17:00.347 00:17:00.347 Persistent Memory Region Support 00:17:00.347 ================================ 00:17:00.347 Supported: No 00:17:00.347 00:17:00.347 Admin Command Set Attributes 00:17:00.347 ============================ 00:17:00.347 Security Send/Receive: Not Supported 00:17:00.347 Format NVM: Not Supported 00:17:00.347 Firmware Activate/Download: Not Supported 00:17:00.347 Namespace Management: Not Supported 00:17:00.347 Device Self-Test: Not Supported 00:17:00.347 Directives: Not Supported 00:17:00.348 NVMe-MI: Not Supported 00:17:00.348 Virtualization Management: Not Supported 00:17:00.348 Doorbell Buffer Config: Not Supported 00:17:00.348 Get LBA Status Capability: Not Supported 00:17:00.348 Command & Feature Lockdown Capability: Not Supported 00:17:00.348 Abort Command Limit: 4 00:17:00.348 Async Event Request Limit: 4 00:17:00.348 Number of Firmware Slots: N/A 00:17:00.348 Firmware Slot 1 Read-Only: N/A 00:17:00.348 Firmware Activation Without Reset: N/A 00:17:00.348 Multiple Update Detection Support: N/A 00:17:00.348 Firmware Update Granularity: No Information Provided 00:17:00.348 Per-Namespace SMART Log: Yes 00:17:00.348 Asymmetric Namespace Access Log Page: Supported 00:17:00.348 ANA Transition Time : 10 sec 00:17:00.348 00:17:00.348 Asymmetric Namespace Access Capabilities 00:17:00.348 ANA Optimized State : Supported 00:17:00.348 ANA Non-Optimized State : Supported 00:17:00.348 ANA Inaccessible State : Supported 00:17:00.348 ANA Persistent Loss State : Supported 00:17:00.348 ANA Change State : Supported 00:17:00.348 ANAGRPID is not changed : No 00:17:00.348 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:00.348 00:17:00.348 ANA Group Identifier Maximum : 128 00:17:00.348 Number of ANA Group Identifiers : 128 00:17:00.348 Max Number of Allowed Namespaces : 1024 00:17:00.348 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:00.348 Command Effects Log Page: Supported 00:17:00.348 Get Log Page Extended Data: Supported 00:17:00.348 Telemetry Log Pages: Not Supported 00:17:00.348 Persistent Event Log Pages: Not Supported 00:17:00.348 Supported Log Pages Log Page: May Support 00:17:00.348 Commands Supported & Effects Log Page: Not Supported 00:17:00.348 Feature Identifiers & Effects Log Page:May Support 00:17:00.348 NVMe-MI Commands & Effects Log Page: May Support 00:17:00.348 Data Area 4 for Telemetry Log: Not Supported 00:17:00.348 Error Log Page Entries Supported: 128 00:17:00.348 Keep Alive: Supported 00:17:00.348 Keep Alive Granularity: 1000 ms 00:17:00.348 00:17:00.348 NVM Command Set Attributes 00:17:00.348 ========================== 00:17:00.348 Submission Queue Entry Size 00:17:00.348 Max: 64 00:17:00.348 Min: 64 00:17:00.348 Completion Queue Entry Size 00:17:00.348 Max: 16 00:17:00.348 Min: 16 00:17:00.348 Number of Namespaces: 1024 00:17:00.348 Compare Command: Not Supported 00:17:00.348 Write Uncorrectable Command: Not Supported 00:17:00.348 Dataset Management Command: Supported 00:17:00.348 Write Zeroes Command: Supported 00:17:00.348 Set Features Save Field: Not Supported 00:17:00.348 Reservations: Not Supported 00:17:00.348 Timestamp: Not Supported 00:17:00.348 Copy: Not Supported 00:17:00.348 Volatile Write Cache: Present 00:17:00.348 Atomic Write Unit (Normal): 1 00:17:00.348 Atomic Write Unit (PFail): 1 00:17:00.348 Atomic Compare & Write Unit: 1 00:17:00.348 Fused Compare & Write: Not Supported 00:17:00.348 Scatter-Gather List 00:17:00.348 SGL Command Set: Supported 00:17:00.348 SGL Keyed: Not Supported 00:17:00.348 SGL Bit Bucket Descriptor: Not Supported 00:17:00.348 SGL Metadata Pointer: Not Supported 00:17:00.348 Oversized SGL: Not Supported 00:17:00.348 SGL Metadata Address: Not Supported 00:17:00.348 SGL Offset: Supported 00:17:00.348 Transport SGL Data Block: Not Supported 00:17:00.348 Replay Protected Memory Block: Not Supported 00:17:00.348 00:17:00.348 Firmware Slot Information 00:17:00.348 ========================= 00:17:00.348 Active slot: 0 00:17:00.348 00:17:00.348 Asymmetric Namespace Access 00:17:00.348 =========================== 00:17:00.348 Change Count : 0 00:17:00.348 Number of ANA Group Descriptors : 1 00:17:00.348 ANA Group Descriptor : 0 00:17:00.348 ANA Group ID : 1 00:17:00.348 Number of NSID Values : 1 00:17:00.348 Change Count : 0 00:17:00.348 ANA State : 1 00:17:00.348 Namespace Identifier : 1 00:17:00.348 00:17:00.348 Commands Supported and Effects 00:17:00.348 ============================== 00:17:00.348 Admin Commands 00:17:00.348 -------------- 00:17:00.348 Get Log Page (02h): Supported 00:17:00.348 Identify (06h): Supported 00:17:00.348 Abort (08h): Supported 00:17:00.348 Set Features (09h): Supported 00:17:00.348 Get Features (0Ah): Supported 00:17:00.348 Asynchronous Event Request (0Ch): Supported 00:17:00.348 Keep Alive (18h): Supported 00:17:00.348 I/O Commands 00:17:00.348 ------------ 00:17:00.348 Flush (00h): Supported 00:17:00.348 Write (01h): Supported LBA-Change 00:17:00.348 Read (02h): Supported 00:17:00.348 Write Zeroes (08h): Supported LBA-Change 00:17:00.348 Dataset Management (09h): Supported 00:17:00.348 00:17:00.348 Error Log 00:17:00.348 ========= 00:17:00.348 Entry: 0 00:17:00.348 Error Count: 0x3 00:17:00.348 Submission Queue Id: 0x0 00:17:00.348 Command Id: 0x5 00:17:00.348 Phase Bit: 0 00:17:00.348 Status Code: 0x2 00:17:00.348 Status Code Type: 0x0 00:17:00.348 Do Not Retry: 1 00:17:00.348 Error 
Location: 0x28 00:17:00.348 LBA: 0x0 00:17:00.348 Namespace: 0x0 00:17:00.348 Vendor Log Page: 0x0 00:17:00.348 ----------- 00:17:00.348 Entry: 1 00:17:00.348 Error Count: 0x2 00:17:00.348 Submission Queue Id: 0x0 00:17:00.348 Command Id: 0x5 00:17:00.348 Phase Bit: 0 00:17:00.348 Status Code: 0x2 00:17:00.348 Status Code Type: 0x0 00:17:00.348 Do Not Retry: 1 00:17:00.348 Error Location: 0x28 00:17:00.348 LBA: 0x0 00:17:00.348 Namespace: 0x0 00:17:00.348 Vendor Log Page: 0x0 00:17:00.348 ----------- 00:17:00.348 Entry: 2 00:17:00.348 Error Count: 0x1 00:17:00.348 Submission Queue Id: 0x0 00:17:00.348 Command Id: 0x4 00:17:00.348 Phase Bit: 0 00:17:00.348 Status Code: 0x2 00:17:00.348 Status Code Type: 0x0 00:17:00.348 Do Not Retry: 1 00:17:00.348 Error Location: 0x28 00:17:00.348 LBA: 0x0 00:17:00.348 Namespace: 0x0 00:17:00.348 Vendor Log Page: 0x0 00:17:00.348 00:17:00.348 Number of Queues 00:17:00.348 ================ 00:17:00.348 Number of I/O Submission Queues: 128 00:17:00.348 Number of I/O Completion Queues: 128 00:17:00.348 00:17:00.348 ZNS Specific Controller Data 00:17:00.348 ============================ 00:17:00.348 Zone Append Size Limit: 0 00:17:00.348 00:17:00.348 00:17:00.348 Active Namespaces 00:17:00.348 ================= 00:17:00.348 get_feature(0x05) failed 00:17:00.348 Namespace ID:1 00:17:00.348 Command Set Identifier: NVM (00h) 00:17:00.348 Deallocate: Supported 00:17:00.348 Deallocated/Unwritten Error: Not Supported 00:17:00.348 Deallocated Read Value: Unknown 00:17:00.348 Deallocate in Write Zeroes: Not Supported 00:17:00.348 Deallocated Guard Field: 0xFFFF 00:17:00.348 Flush: Supported 00:17:00.348 Reservation: Not Supported 00:17:00.348 Namespace Sharing Capabilities: Multiple Controllers 00:17:00.348 Size (in LBAs): 1310720 (5GiB) 00:17:00.348 Capacity (in LBAs): 1310720 (5GiB) 00:17:00.348 Utilization (in LBAs): 1310720 (5GiB) 00:17:00.348 UUID: 506efee0-f94c-4a05-b48d-9c0ca369718b 00:17:00.348 Thin Provisioning: Not Supported 00:17:00.348 Per-NS Atomic Units: Yes 00:17:00.348 Atomic Boundary Size (Normal): 0 00:17:00.348 Atomic Boundary Size (PFail): 0 00:17:00.348 Atomic Boundary Offset: 0 00:17:00.348 NGUID/EUI64 Never Reused: No 00:17:00.348 ANA group ID: 1 00:17:00.348 Namespace Write Protected: No 00:17:00.348 Number of LBA Formats: 1 00:17:00.348 Current LBA Format: LBA Format #00 00:17:00.348 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:00.348 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.348 rmmod nvme_tcp 00:17:00.348 rmmod nvme_fabrics 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:00.348 19:24:58 
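Both reports above were served by the Linux-kernel nvmet target that this test configured through configfs a few entries earlier. The xtrace does not show where each echo was redirected, so the attribute file names below are inferred from the standard nvmet configfs layout rather than copied from nvmf/common.sh; the Model Number SPDK-nqn.2016-06.io.spdk:testnqn in the second report is consistent with the attr_model write. A hedged sketch of that setup, backed by /dev/nvme1n1, the unclaimed namespace picked by the block-device scan:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
# Subsystem: model string, open to any host NQN.
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
# Namespace 1 backed by the free local NVMe namespace.
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
# TCP listener on 10.0.0.1:4420.
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
# Exposing the subsystem on the port is just a symlink.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
# Queries that produced the output above (host NQN/ID come from the
# 'nvme gen-hostnqn' call earlier in the run):
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -a 10.0.0.1 -t tcp -s 4420
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The get_feature(0x01)/(0x02)/(0x04)/(0x05) failed lines simply record optional Get Features commands that the kernel target does not implement.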
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.348 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:00.349 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:00.610 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:00.610 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:00.610 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.610 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:00.611 19:24:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:01.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:01.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:01.435 00:17:01.435 real 0m3.113s 00:17:01.435 user 0m1.114s 00:17:01.435 sys 0m1.417s 00:17:01.435 19:24:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.435 19:24:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.435 ************************************ 00:17:01.435 END TEST nvmf_identify_kernel_target 00:17:01.435 ************************************ 00:17:01.435 19:24:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:01.435 19:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.435 19:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.435 19:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.435 ************************************ 00:17:01.435 START TEST nvmf_auth_host 00:17:01.435 ************************************ 00:17:01.435 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:01.694 * Looking for test storage... 
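The teardown traced above is the mirror image of that setup: every iptables rule the harness added was tagged with an SPDK_NVMF comment so it can be dropped wholesale, the veth/bridge topology is deleted, and the configfs tree is unwound in reverse creation order before nvmet is unloaded. A hedged sketch of the firewall and kernel-target cleanup (the echo 0 redirect target is again inferred, since the xtrace hides redirections):

# Rules were installed earlier as, for example:
#   iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
#       -m comment --comment 'SPDK_NVMF:...'
# so removing every tagged rule is a filter over iptables-save output:
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Kernel target removal, reverse of the configfs setup:
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet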
00:17:01.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.694 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:01.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.694 --rc genhtml_branch_coverage=1 00:17:01.694 --rc genhtml_function_coverage=1 00:17:01.694 --rc genhtml_legend=1 00:17:01.694 --rc geninfo_all_blocks=1 00:17:01.694 --rc geninfo_unexecuted_blocks=1 00:17:01.694 00:17:01.694 ' 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:01.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.694 --rc genhtml_branch_coverage=1 00:17:01.694 --rc genhtml_function_coverage=1 00:17:01.694 --rc genhtml_legend=1 00:17:01.694 --rc geninfo_all_blocks=1 00:17:01.694 --rc geninfo_unexecuted_blocks=1 00:17:01.694 00:17:01.694 ' 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:01.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.694 --rc genhtml_branch_coverage=1 00:17:01.694 --rc genhtml_function_coverage=1 00:17:01.694 --rc genhtml_legend=1 00:17:01.694 --rc geninfo_all_blocks=1 00:17:01.694 --rc geninfo_unexecuted_blocks=1 00:17:01.694 00:17:01.694 ' 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:01.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.694 --rc genhtml_branch_coverage=1 00:17:01.694 --rc genhtml_function_coverage=1 00:17:01.694 --rc genhtml_legend=1 00:17:01.694 --rc geninfo_all_blocks=1 00:17:01.694 --rc geninfo_unexecuted_blocks=1 00:17:01.694 00:17:01.694 ' 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.694 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.695 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:01.695 Cannot find device "nvmf_init_br" 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:01.695 Cannot find device "nvmf_init_br2" 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:01.695 Cannot find device "nvmf_tgt_br" 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.695 Cannot find device "nvmf_tgt_br2" 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:01.695 Cannot find device "nvmf_init_br" 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:01.695 Cannot find device "nvmf_init_br2" 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:01.695 Cannot find device "nvmf_tgt_br" 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:01.695 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:01.695 Cannot find device "nvmf_tgt_br2" 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:01.954 Cannot find device "nvmf_br" 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:01.954 Cannot find device "nvmf_init_if" 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:01.954 Cannot find device "nvmf_init_if2" 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.954 19:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:01.954 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:01.955 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
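[editor's note] The nvmf_veth_init trace above (and the two bridge commands that follow) boils down to a small, self-contained topology: two initiator-side veth pairs kept in the root namespace and two target-side veth pairs whose far ends are moved into a dedicated namespace, with all host-side peers attached to one bridge. A condensed reconstruction of the effect, using only the names and addresses that appear in the trace (this is a sketch, not the helper itself):

# sketch of the topology built by nvmf_veth_init (commands as shown in the trace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # target-side ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator addresses (root namespace)
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br               # bridge all host-side peers together
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br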
00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:02.213 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:02.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:17:02.213 00:17:02.213 --- 10.0.0.3 ping statistics --- 00:17:02.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.213 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:02.214 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:02.214 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:17:02.214 00:17:02.214 --- 10.0.0.4 ping statistics --- 00:17:02.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.214 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:02.214 00:17:02.214 --- 10.0.0.1 ping statistics --- 00:17:02.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.214 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:02.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:02.214 00:17:02.214 --- 10.0.0.2 ping statistics --- 00:17:02.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.214 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78033 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78033 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78033 ']' 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
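[editor's note] Once the iptables ACCEPT rules are in place and the four cross-namespace pings succeed, nvmfappstart launches the SPDK application inside the namespace and waits for its RPC socket. A condensed sketch of that step; the binary path, flags and socket path are taken from the trace, while the polling loop is an assumption standing in for the real waitforlisten helper:

# sketch of nvmfappstart (paths/flags from the log; the wait loop is illustrative, not the real helper)
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# poll until the app answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done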
00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.214 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:02.472 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d57945e09719dde8856da0898e29f29 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UHm 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d57945e09719dde8856da0898e29f29 0 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d57945e09719dde8856da0898e29f29 0 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6d57945e09719dde8856da0898e29f29 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UHm 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.UHm 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.UHm 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.732 19:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=148611b370d5f25b930365eed16a29e87827e2b87b077ee83d3dcea64adc8558 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.h5M 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 148611b370d5f25b930365eed16a29e87827e2b87b077ee83d3dcea64adc8558 3 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 148611b370d5f25b930365eed16a29e87827e2b87b077ee83d3dcea64adc8558 3 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=148611b370d5f25b930365eed16a29e87827e2b87b077ee83d3dcea64adc8558 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:02.732 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.h5M 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.h5M 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.h5M 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=259d420310bff251c41aa2ccf844084adc7b8ce8797f86be 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jCY 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 259d420310bff251c41aa2ccf844084adc7b8ce8797f86be 0 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 259d420310bff251c41aa2ccf844084adc7b8ce8797f86be 0 
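[editor's note] The gen_dhchap_key calls in this stretch all follow the same pattern: draw len/2 random bytes with xxd, wrap them in the DH-HMAC-CHAP secret representation (DHHC-1:<hash id>:<base64 payload>:) through a short inline python step, and store the result mode 0600 under /tmp. An illustrative stand-in is shown below; it assumes the payload is base64(key || CRC-32 of the key) as in the NVMe DH-HMAC-CHAP secret format, and the function body is a sketch rather than the exact common.sh helper:

# illustrative gen_dhchap_key <digest> <len>; payload format base64(key || CRC-32) is an assumption
gen_dhchap_key() {
    local digest=$1 len=$2                      # digest null/sha256/sha384/sha512 -> id 0/1/2/3
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    python3 -c 'import sys, base64, struct, zlib
k = bytes.fromhex(sys.argv[1]); hid = int(sys.argv[2])
print("DHHC-1:%02x:%s:" % (hid, base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' \
        "$hex" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

For example, keys[0]=$(gen_dhchap_key null 32) yields a file containing a DHHC-1:00:...: secret of the same shape as the keys registered with keyring_file_add_key later in the trace.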
00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=259d420310bff251c41aa2ccf844084adc7b8ce8797f86be 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jCY 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jCY 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jCY 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fad601511f4ccbb8fabbf32cb04cd9e1e5227d256633f524 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LVW 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fad601511f4ccbb8fabbf32cb04cd9e1e5227d256633f524 2 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fad601511f4ccbb8fabbf32cb04cd9e1e5227d256633f524 2 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fad601511f4ccbb8fabbf32cb04cd9e1e5227d256633f524 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LVW 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LVW 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LVW 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.732 19:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.732 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b2e0b6dfb5b33a117866b4f7e19a54c1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ay1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b2e0b6dfb5b33a117866b4f7e19a54c1 1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b2e0b6dfb5b33a117866b4f7e19a54c1 1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b2e0b6dfb5b33a117866b4f7e19a54c1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ay1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ay1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ay1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=16695898eedba90013e5476185406da6 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Fta 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 16695898eedba90013e5476185406da6 1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 16695898eedba90013e5476185406da6 1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=16695898eedba90013e5476185406da6 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Fta 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Fta 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Fta 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.991 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=39b34cf1e1d276ac086f9f26ab2f4cf81808cdb5e1f40752 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SWP 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 39b34cf1e1d276ac086f9f26ab2f4cf81808cdb5e1f40752 2 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 39b34cf1e1d276ac086f9f26ab2f4cf81808cdb5e1f40752 2 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=39b34cf1e1d276ac086f9f26ab2f4cf81808cdb5e1f40752 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SWP 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SWP 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SWP 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:02.992 19:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=feef82edb34cc59b4bfe30d99df4dd01 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2ZS 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key feef82edb34cc59b4bfe30d99df4dd01 0 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 feef82edb34cc59b4bfe30d99df4dd01 0 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=feef82edb34cc59b4bfe30d99df4dd01 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:02.992 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2ZS 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2ZS 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2ZS 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=508827ddc9517cbdb16d57ee46fd1f4bd4e5af13a4a0c7e5177b4eb713227b43 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.43y 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 508827ddc9517cbdb16d57ee46fd1f4bd4e5af13a4a0c7e5177b4eb713227b43 3 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 508827ddc9517cbdb16d57ee46fd1f4bd4e5af13a4a0c7e5177b4eb713227b43 3 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=508827ddc9517cbdb16d57ee46fd1f4bd4e5af13a4a0c7e5177b4eb713227b43 00:17:03.250 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.43y 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.43y 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.43y 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78033 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78033 ']' 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.251 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UHm 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.h5M ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.h5M 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jCY 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LVW ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.LVW 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ay1 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Fta ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fta 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SWP 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2ZS ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2ZS 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.43y 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.510 19:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:03.510 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:04.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.077 Waiting for block devices as requested 00:17:04.077 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:04.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:04.645 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:04.645 No valid GPT data, bailing 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:04.645 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:04.903 No valid GPT data, bailing 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:04.903 No valid GPT data, bailing 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:04.903 No valid GPT data, bailing 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:04.903 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -a 10.0.0.1 -t tcp -s 4420 00:17:04.904 00:17:04.904 Discovery Log Number of Records 2, Generation counter 2 00:17:04.904 =====Discovery Log Entry 0====== 00:17:04.904 trtype: tcp 00:17:04.904 adrfam: ipv4 00:17:04.904 subtype: current discovery subsystem 00:17:04.904 treq: not specified, sq flow control disable supported 00:17:04.904 portid: 1 00:17:04.904 trsvcid: 4420 00:17:04.904 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:04.904 traddr: 10.0.0.1 00:17:04.904 eflags: none 00:17:04.904 sectype: none 00:17:04.904 =====Discovery Log Entry 1====== 00:17:04.904 trtype: tcp 00:17:04.904 adrfam: ipv4 00:17:04.904 subtype: nvme subsystem 00:17:04.904 treq: not specified, sq flow control disable supported 00:17:04.904 portid: 1 00:17:04.904 trsvcid: 4420 00:17:04.904 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:04.904 traddr: 10.0.0.1 00:17:04.904 eflags: none 00:17:04.904 sectype: none 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.904 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.162 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.163 nvme0n1 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.163 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.421 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.422 nvme0n1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.422 
19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.422 19:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.422 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.679 nvme0n1 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:05.680 19:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.680 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 nvme0n1 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.938 19:25:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.938 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.939 nvme0n1 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.939 
19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.939 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
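Each of the iterations logged above follows the same connect_authenticate pattern: the host side first restricts the allowed DH-HMAC-CHAP digests and DH groups through bdev_nvme_set_options, attaches a TCP controller with the per-keyid host key (and, when present, the matching controller key), checks that bdev_nvme_get_controllers reports nvme0, and then detaches it before moving on to the next digest/dhgroup/keyid combination. The sketch below is a minimal, hedged condensation of that sequence, not the test script itself: rpc_cmd is approximated here as a direct call to SPDK's scripts/rpc.py, SPDK_ROOT and the keyN/ckeyN key names are assumptions (the keys are registered earlier in the test, outside this excerpt), and the target-side nvmet_auth_set_key step is omitted because the log shows the echoed digest, dhgroup and DHHC-1 key values but not where they are written.

  #!/usr/bin/env bash
  # Hedged sketch of one connect_authenticate iteration from the log above.
  # Assumptions: rpc_cmd wraps scripts/rpc.py, SPDK_ROOT points at an SPDK
  # tree, and keyring entries keyN/ckeyN already exist for this keyid.
  set -euo pipefail

  SPDK_ROOT=${SPDK_ROOT:-/path/to/spdk}   # assumption: SPDK checkout location
  rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }

  digest=sha256                           # loop variables seen in the log
  dhgroup=ffdhe2048
  keyid=1
  target_ip=10.0.0.1                      # NVMF_INITIATOR_IP picked by get_main_ns_ip above

  # 1) Restrict the initiator to the digest/dhgroup pair under test
  #    (host/auth.sh@60 in the log).
  rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" \
      --dhchap-dhgroups "$dhgroup"

  # 2) Attach the controller with the per-keyid host key and controller key,
  #    mirroring the host/auth.sh@61 invocation above.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$target_ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # 3) Authentication succeeded only if the controller is now visible under
  #    its bdev name (host/auth.sh@64).
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

  # 4) Tear down so the next digest/dhgroup/keyid combination renegotiates
  #    from a clean state (host/auth.sh@65).
  rpc_cmd bdev_nvme_detach_controller nvme0

The detach at the end of every iteration appears deliberate: each digest/dhgroup/keyid combination is exercised against a freshly attached controller, so a successful earlier negotiation cannot mask a failure in a later one.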
00:17:06.197 nvme0n1 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.197 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:06.456 19:25:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.456 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.714 nvme0n1 00:17:06.714 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.714 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.714 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.714 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.714 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.714 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.714 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.714 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.714 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.714 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.714 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.714 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.714 19:25:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:06.714 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.715 19:25:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.715 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.974 nvme0n1 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:06.974 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.975 nvme0n1 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.975 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.234 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.235 nvme0n1 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.235 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.494 nvme0n1 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.494 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.063 19:25:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.063 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.322 nvme0n1 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.322 19:25:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.322 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.581 nvme0n1 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.581 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.840 nvme0n1 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.100 nvme0n1 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.100 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.358 nvme0n1 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.358 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.359 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.359 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:09.359 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:09.359 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.359 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.262 nvme0n1 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.262 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.263 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.521 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.780 nvme0n1 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.780 19:25:10 
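The get_main_ns_ip helper traced repeatedly in this section only decides which environment variable carries the initiator-side address for the transport under test and prints its value; with tcp it resolves NVMF_INITIATOR_IP, which is 10.0.0.1 in this run (NVMF_FIRST_TARGET_IP would be used for rdma). A condensed sketch of that selection, assuming the transport name is carried in TEST_TRANSPORT as elsewhere in nvmf/common.sh (the real helper also has the empty-value fallbacks seen in the trace):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      # Pick the variable *name* for this transport, then dereference it.
      ip=${ip_candidates[$TEST_TRANSPORT]}
      echo "${!ip}"   # 10.0.0.1 for tcp in this run
  }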
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.780 19:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.780 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.054 nvme0n1 00:17:12.054 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.054 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.054 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.054 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.054 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.054 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.313 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.313 
19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.572 nvme0n1 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.573 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.831 nvme0n1 00:17:12.831 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.831 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.831 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.831 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.831 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.090 19:25:11 
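Note how the keyid 4 attach just above passes --dhchap-key key4 with no --dhchap-ctrlr-key: ckeys[4] is empty, so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion seen at host/auth.sh@58 contributes nothing and only the host authenticates, whereas keyids 0-3 also present a controller key for bidirectional DH-HMAC-CHAP. A standalone illustration of that expansion pattern (placeholder secrets, not the real ones from this run):

  # ":+" yields the alternate words only when the slot is set and non-empty,
  # so an empty ckey silently drops the whole option pair.
  ckeys=([0]="DHHC-1:03:placeholder" [4]="")
  for keyid in 0 4; do
      ckey_opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey_opt[*]:-<none>}"
  done
  # keyid=0 extra args: --dhchap-ctrlr-key ckey0
  # keyid=4 extra args: <none>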
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.090 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.658 nvme0n1 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.658 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.224 nvme0n1 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.224 
19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.224 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.867 nvme0n1 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.867 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.435 nvme0n1 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.435 19:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.435 19:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.435 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.003 nvme0n1 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.003 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.262 nvme0n1 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.262 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.263 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.522 nvme0n1 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:16.522 
19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.522 nvme0n1 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:16.522 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.523 
19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.523 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.781 nvme0n1 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.781 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.782 nvme0n1 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.782 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.040 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.040 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.040 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.040 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.040 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.040 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.040 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 nvme0n1 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.041 
19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.041 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.300 19:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.300 nvme0n1 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:17.300 19:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.300 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.560 nvme0n1 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.560 19:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.560 nvme0n1 00:17:17.560 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.819 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.819 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.819 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.819 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.819 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.819 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.819 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.819 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.820 
19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
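For reference, a minimal sketch of the per-keyid sequence the trace above is exercising (a reconstruction from the commands visible in this log, not the authoritative host/auth.sh source; it assumes the suite's rpc_cmd JSON-RPC helper and the key names key<N>/ckey<N> loaded earlier by the test, and omits the matching target-side nvmet_auth_set_key step):

  # One iteration of the connect_authenticate loop, as seen in the trace.
  digest=sha384 dhgroup=ffdhe3072 keyid=1
  # Restrict the initiator to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Attach with the host key (and controller key, when a ckey exists for this keyid).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # Verify the authenticated controller came up, then detach it before the next keyid.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
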
00:17:17.820 nvme0n1 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.820 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:18.078 19:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 nvme0n1 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.078 19:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.078 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.079 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.079 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:18.079 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.337 19:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.337 nvme0n1 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.337 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.338 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.338 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.338 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.338 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:18.596 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.597 nvme0n1 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.597 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.597 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.597 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:18.597 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.597 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.856 nvme0n1 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.856 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.115 nvme0n1 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.115 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.374 19:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.374 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.633 nvme0n1 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.633 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.634 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.634 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.634 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.634 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.634 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.634 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.634 19:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.634 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.893 nvme0n1 00:17:19.893 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.893 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.893 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.893 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.893 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.893 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.152 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.411 nvme0n1 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.411 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.669 nvme0n1 00:17:20.669 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.669 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.669 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.669 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.669 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.669 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.935 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.936 19:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.936 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.196 nvme0n1 00:17:21.196 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.196 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.764 nvme0n1 00:17:21.764 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.764 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.765 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.333 nvme0n1 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.333 19:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:22.333 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.334 19:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.334 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.902 nvme0n1 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.902 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.229 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.229 
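[editor's note] For reference, the host-side sequence that the trace keeps repeating can be condensed into the sketch below. The RPC names and arguments are taken verbatim from the rpc_cmd lines in the log; the variable framing is added here for readability, key3/ckey3 are keyring names registered earlier in the test, and rpc_cmd is assumed to be the usual autotest helper that forwards to SPDK's rpc.py. This is a reconstruction from the xtrace output, not an authoritative copy of host/auth.sh.

  # One connect_authenticate iteration (digest=sha384, dhgroup=ffdhe8192, keyid=3)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # The test then verifies that the authenticated controller shows up before detaching it.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
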
19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.487 nvme0n1 00:17:23.487 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.487 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.487 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.487 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.487 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.487 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.745 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 nvme0n1 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:24.309 19:25:22 
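[editor's note] The repetition in this log comes from the three nested loops visible at host/auth.sh lines 100-102 in the trace. A minimal sketch of that structure follows; only the digests, dhgroups, and key IDs actually seen in this excerpt are assumed for the array contents.

  for digest in "${digests[@]}"; do          # sha384 and sha512 appear in this excerpt
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048/3072/6144/8192 appear in this excerpt
          for keyid in "${!keys[@]}"; do     # key IDs 0 through 4 appear in this excerpt
              nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target-side key/digest/dhgroup setup
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host-side attach, verify, detach
          done
      done
  done
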
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.309 19:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.309 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.566 nvme0n1 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:24.566 19:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.566 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.567 nvme0n1 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.567 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.567 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.567 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.825 nvme0n1 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.825 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.084 nvme0n1 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.084 nvme0n1 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.084 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.343 nvme0n1 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.343 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.344 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.603 nvme0n1 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:25.603 
19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.603 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.863 nvme0n1 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.863 
19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.863 nvme0n1 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.863 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.122 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.123 nvme0n1 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.123 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.382 nvme0n1 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.382 
19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.382 19:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.382 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.641 nvme0n1 00:17:26.641 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.641 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.641 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.641 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.641 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.641 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:26.641 19:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.641 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.642 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.642 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.901 nvme0n1 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.901 19:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.901 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.902 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.902 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.902 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.902 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.902 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 nvme0n1 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.160 
19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.160 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
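Every ffdhe pass in this trace repeats the same host-side sequence: restrict the allowed digest and DH group, attach the controller with the DH-HMAC-CHAP secrets for the current key ID, check that the controller appears, then detach it before the next iteration. A minimal sketch of that loop follows; it assumes rpc_cmd is a stand-in for the autotest wrapper around scripts/rpc.py and that the keys/ckeys arrays were populated with the DHHC-1 secrets earlier in the run (the literal values from the trace are not repeated here):

    # sketch only: rpc_cmd and the keys/ckeys arrays are assumed to come from the surrounding test setup
    rpc_cmd() { scripts/rpc.py "$@"; }      # stand-in for the autotest helper
    declare -a keys ckeys                    # populated earlier with the DHHC-1 secrets shown above
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
      for keyid in "${!keys[@]}"; do
        # host side: allow only this digest/DH-group combination
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # attach with the per-key secret; the controller key is passed only when a ckey exists for this key ID
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # authentication succeeded if the named controller shows up, then detach for the next pass
        [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done

The ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"} expansion mirrors host/auth.sh@58 in the trace: key ID 4 has no controller key, so that option is simply omitted for its pass.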
00:17:27.419 nvme0n1 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.419 19:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.419 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.985 nvme0n1 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.985 19:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.985 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.986 19:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.986 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.243 nvme0n1 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:28.243 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.244 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.501 nvme0n1 00:17:28.501 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.501 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.501 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.501 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.501 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.501 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:28.759 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.760 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.019 nvme0n1 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.019 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.278 nvme0n1 00:17:29.278 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.278 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.278 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.278 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.278 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.278 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmQ1Nzk0NWUwOTcxOWRkZTg4NTZkYTA4OThlMjlmMjklbs7v: 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: ]] 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ4NjExYjM3MGQ1ZjI1YjkzMDM2NWVlZDE2YTI5ZTg3ODI3ZTJiODdiMDc3ZWU4M2QzZGNlYTY0YWRjODU1OF8Ot24=: 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.537 19:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.537 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 nvme0n1 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.104 19:25:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.104 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.671 nvme0n1 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:30.671 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.672 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.244 nvme0n1 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzliMzRjZjFlMWQyNzZhYzA4NmY5ZjI2YWIyZjRjZjgxODA4Y2RiNWUxZjQwNzUyWWFybw==: 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVlZjgyZWRiMzRjYzU5YjRiZmUzMGQ5OWRmNGRkMDEjJ0Fb: 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.244 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.810 nvme0n1 00:17:31.810 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.810 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.810 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.810 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.810 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTA4ODI3ZGRjOTUxN2NiZGIxNmQ1N2VlNDZmZDFmNGJkNGU1YWYxM2E0YTBjN2U1MTc3YjRlYjcxMzIyN2I0M1BlgOA=: 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.811 19:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.811 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 nvme0n1 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 request: 00:17:32.377 { 00:17:32.377 "name": "nvme0", 00:17:32.377 "trtype": "tcp", 00:17:32.377 "traddr": "10.0.0.1", 00:17:32.377 "adrfam": "ipv4", 00:17:32.377 "trsvcid": "4420", 00:17:32.377 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:32.377 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:32.377 "prchk_reftag": false, 00:17:32.377 "prchk_guard": false, 00:17:32.377 "hdgst": false, 00:17:32.377 "ddgst": false, 00:17:32.377 "allow_unrecognized_csi": false, 00:17:32.377 "method": "bdev_nvme_attach_controller", 00:17:32.377 "req_id": 1 00:17:32.377 } 00:17:32.377 Got JSON-RPC error response 00:17:32.377 response: 00:17:32.377 { 00:17:32.377 "code": -5, 00:17:32.377 "message": "Input/output error" 00:17:32.377 } 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.637 request: 00:17:32.637 { 00:17:32.637 "name": "nvme0", 00:17:32.637 "trtype": "tcp", 00:17:32.637 "traddr": "10.0.0.1", 00:17:32.637 "adrfam": "ipv4", 00:17:32.637 "trsvcid": "4420", 00:17:32.637 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:32.637 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:32.637 "prchk_reftag": false, 00:17:32.637 "prchk_guard": false, 00:17:32.637 "hdgst": false, 00:17:32.637 "ddgst": false, 00:17:32.637 "dhchap_key": "key2", 00:17:32.637 "allow_unrecognized_csi": false, 00:17:32.637 "method": "bdev_nvme_attach_controller", 00:17:32.637 "req_id": 1 00:17:32.637 } 00:17:32.637 Got JSON-RPC error response 00:17:32.637 response: 00:17:32.637 { 00:17:32.637 "code": -5, 00:17:32.637 "message": "Input/output error" 00:17:32.637 } 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.637 19:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.637 request: 00:17:32.637 { 00:17:32.637 "name": "nvme0", 00:17:32.637 "trtype": "tcp", 00:17:32.637 "traddr": "10.0.0.1", 00:17:32.637 "adrfam": "ipv4", 00:17:32.637 "trsvcid": "4420", 
00:17:32.637 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:32.637 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:32.637 "prchk_reftag": false, 00:17:32.637 "prchk_guard": false, 00:17:32.637 "hdgst": false, 00:17:32.637 "ddgst": false, 00:17:32.637 "dhchap_key": "key1", 00:17:32.637 "dhchap_ctrlr_key": "ckey2", 00:17:32.637 "allow_unrecognized_csi": false, 00:17:32.637 "method": "bdev_nvme_attach_controller", 00:17:32.637 "req_id": 1 00:17:32.637 } 00:17:32.637 Got JSON-RPC error response 00:17:32.637 response: 00:17:32.637 { 00:17:32.637 "code": -5, 00:17:32.637 "message": "Input/output error" 00:17:32.637 } 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.637 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.637 nvme0n1 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.637 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.896 request: 00:17:32.896 { 00:17:32.896 "name": "nvme0", 00:17:32.896 "dhchap_key": "key1", 00:17:32.896 "dhchap_ctrlr_key": "ckey2", 00:17:32.896 "method": "bdev_nvme_set_keys", 00:17:32.896 "req_id": 1 00:17:32.896 } 00:17:32.896 Got JSON-RPC error response 00:17:32.896 response: 00:17:32.896 
{ 00:17:32.896 "code": -13, 00:17:32.896 "message": "Permission denied" 00:17:32.896 } 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:32.896 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU5ZDQyMDMxMGJmZjI1MWM0MWFhMmNjZjg0NDA4NGFkYzdiOGNlODc5N2Y4NmJld1w4wQ==: 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: ]] 00:17:33.831 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFkNjAxNTExZjRjY2JiOGZhYmJmMzJjYjA0Y2Q5ZTFlNTIyN2QyNTY2MzNmNTI0bPAYkQ==: 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.090 nvme0n1 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJlMGI2ZGZiNWIzM2ExMTc4NjZiNGY3ZTE5YTU0YzGjmWq7: 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: ]] 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY2OTU4OThlZWRiYTkwMDEzZTU0NzYxODU0MDZkYTa3FmVj: 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:34.090 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.091 request: 00:17:34.091 { 00:17:34.091 "name": "nvme0", 00:17:34.091 "dhchap_key": "key2", 00:17:34.091 "dhchap_ctrlr_key": "ckey1", 00:17:34.091 "method": "bdev_nvme_set_keys", 00:17:34.091 "req_id": 1 00:17:34.091 } 00:17:34.091 Got JSON-RPC error response 00:17:34.091 response: 00:17:34.091 { 00:17:34.091 "code": -13, 00:17:34.091 "message": "Permission denied" 00:17:34.091 } 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:34.091 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:35.026 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.026 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:35.026 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.026 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.285 rmmod nvme_tcp 00:17:35.285 rmmod nvme_fabrics 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78033 ']' 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78033 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78033 ']' 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78033 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78033 00:17:35.285 killing process with pid 78033 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78033' 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78033 00:17:35.285 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78033 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:35.543 19:25:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:35.543 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:35.801 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:35.801 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:35.801 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:35.801 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:35.801 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:35.801 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:35.801 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:36.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.368 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:17:36.627 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:36.627 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.UHm /tmp/spdk.key-null.jCY /tmp/spdk.key-sha256.Ay1 /tmp/spdk.key-sha384.SWP /tmp/spdk.key-sha512.43y /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:36.627 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:36.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.884 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.884 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.884 00:17:36.884 real 0m35.493s 00:17:36.884 user 0m32.604s 00:17:36.884 sys 0m3.879s 00:17:36.884 19:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.884 19:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.884 ************************************ 00:17:36.884 END TEST nvmf_auth_host 00:17:36.884 ************************************ 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.142 ************************************ 00:17:37.142 START TEST nvmf_digest 00:17:37.142 ************************************ 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:37.142 * Looking for test storage... 
00:17:37.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.142 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.143 --rc genhtml_branch_coverage=1 00:17:37.143 --rc genhtml_function_coverage=1 00:17:37.143 --rc genhtml_legend=1 00:17:37.143 --rc geninfo_all_blocks=1 00:17:37.143 --rc geninfo_unexecuted_blocks=1 00:17:37.143 00:17:37.143 ' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.143 --rc genhtml_branch_coverage=1 00:17:37.143 --rc genhtml_function_coverage=1 00:17:37.143 --rc genhtml_legend=1 00:17:37.143 --rc geninfo_all_blocks=1 00:17:37.143 --rc geninfo_unexecuted_blocks=1 00:17:37.143 00:17:37.143 ' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.143 --rc genhtml_branch_coverage=1 00:17:37.143 --rc genhtml_function_coverage=1 00:17:37.143 --rc genhtml_legend=1 00:17:37.143 --rc geninfo_all_blocks=1 00:17:37.143 --rc geninfo_unexecuted_blocks=1 00:17:37.143 00:17:37.143 ' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:37.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.143 --rc genhtml_branch_coverage=1 00:17:37.143 --rc genhtml_function_coverage=1 00:17:37.143 --rc genhtml_legend=1 00:17:37.143 --rc geninfo_all_blocks=1 00:17:37.143 --rc geninfo_unexecuted_blocks=1 00:17:37.143 00:17:37.143 ' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.143 19:25:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.143 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:37.143 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.144 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:37.402 Cannot find device "nvmf_init_br" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:37.402 Cannot find device "nvmf_init_br2" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:37.402 Cannot find device "nvmf_tgt_br" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:37.402 Cannot find device "nvmf_tgt_br2" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:37.402 Cannot find device "nvmf_init_br" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:37.402 Cannot find device "nvmf_init_br2" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:37.402 Cannot find device "nvmf_tgt_br" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:37.402 Cannot find device "nvmf_tgt_br2" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:37.402 Cannot find device "nvmf_br" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:37.402 Cannot find device "nvmf_init_if" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:37.402 Cannot find device "nvmf_init_if2" 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:37.402 19:25:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:37.402 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:37.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:37.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:17:37.661 00:17:37.661 --- 10.0.0.3 ping statistics --- 00:17:37.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.661 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:37.661 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:37.661 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:37.661 00:17:37.661 --- 10.0.0.4 ping statistics --- 00:17:37.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.661 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:37.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:37.661 00:17:37.661 --- 10.0.0.1 ping statistics --- 00:17:37.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.661 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:37.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:37.661 00:17:37.661 --- 10.0.0.2 ping statistics --- 00:17:37.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.661 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.661 19:25:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:37.661 ************************************ 00:17:37.661 START TEST nvmf_digest_clean 00:17:37.661 ************************************ 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
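The nvmf/common.sh sequence that just finished builds the whole test network from scratch: veth pairs for the initiator and target sides, a separate network namespace for the target interfaces, a bridge joining the host-side peers, iptables rules admitting TCP port 4420, and a ping in each direction to prove the path works. A condensed sketch of that topology, using the interface names from this log and showing only one pair per side (the real script creates two of each and also tears down any leftovers first):

    # Target interfaces live in their own namespace so the target and initiator
    # stacks stay genuinely separate even though both run on one VM.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns

    # Address the endpoints used later: initiator 10.0.0.1, target listener 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring the links up and bridge the *_br peers so the two sides can reach each other.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Admit NVMe/TCP traffic and verify connectivity in both directions.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1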
00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79657 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79657 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79657 ']' 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.661 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.661 [2024-11-26 19:25:36.072284] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:17:37.661 [2024-11-26 19:25:36.072365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.919 [2024-11-26 19:25:36.227174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.919 [2024-11-26 19:25:36.277883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.919 [2024-11-26 19:25:36.277955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.919 [2024-11-26 19:25:36.277968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.919 [2024-11-26 19:25:36.277979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.919 [2024-11-26 19:25:36.277988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
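nvmfappstart above launches the target inside that namespace with --wait-for-rpc, so nothing is configured until the RPC socket answers, and the script only proceeds once waitforlisten sees /var/tmp/spdk.sock respond. A rough sketch of that start-and-wait pattern; the polling loop is a simplification of the real waitforlisten helper and the rpc_get_methods probe is an assumption about how readiness is checked:

    # Start the target paused for configuration, pinned to core 0 with full tracing.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Wait until the app is listening on its UNIX-domain RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

    # Make sure the target and the test network are cleaned up whenever the test exits.
    trap 'killprocess "$nvmfpid"; cleanup' SIGINT SIGTERM EXIT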
00:17:37.919 [2024-11-26 19:25:36.278407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.919 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.919 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:37.919 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.919 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.919 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.177 [2024-11-26 19:25:36.418573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.177 null0 00:17:38.177 [2024-11-26 19:25:36.470806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.177 [2024-11-26 19:25:36.494948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79683 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79683 /var/tmp/bperf.sock 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79683 ']' 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.177 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.177 [2024-11-26 19:25:36.557548] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:17:38.177 [2024-11-26 19:25:36.557634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79683 ] 00:17:38.436 [2024-11-26 19:25:36.704712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.436 [2024-11-26 19:25:36.747747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.436 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.436 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:38.436 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:38.436 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:38.436 19:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:38.694 [2024-11-26 19:25:37.130834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.951 19:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.951 19:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:39.209 nvme0n1 00:17:39.209 19:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:39.209 19:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:39.209 Running I/O for 2 seconds... 
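The run_bperf helper drives a second SPDK application from here on: bdevperf starts idle (-z) and paused (--wait-for-rpc) on its own socket, the accel framework is then brought up over RPC, a controller is attached to the listener at 10.0.0.3:4420 with the TCP data digest enabled (--ddgst), and bdevperf.py perform_tests runs the 2-second workload against the resulting nvme0n1 bdev. A condensed sketch of that sequence for this first run (randread, 4 KiB, queue depth 128), using the paths shown in the log:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bperf.sock

    # Launch bdevperf on core 1, idle and waiting for RPC configuration.
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!

    # Finish framework init, then attach the target with the data digest turned on.
    "$spdk/scripts/rpc.py" -s "$sock" framework_start_init
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Kick off the I/O and collect the per-job results printed below.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests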
00:17:41.521 17653.00 IOPS, 68.96 MiB/s [2024-11-26T19:25:39.961Z] 17299.50 IOPS, 67.58 MiB/s 00:17:41.521 Latency(us) 00:17:41.521 [2024-11-26T19:25:39.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.521 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:41.521 nvme0n1 : 2.01 17282.39 67.51 0.00 0.00 7401.16 1742.66 20614.05 00:17:41.521 [2024-11-26T19:25:39.961Z] =================================================================================================================== 00:17:41.521 [2024-11-26T19:25:39.961Z] Total : 17282.39 67.51 0.00 0.00 7401.16 1742.66 20614.05 00:17:41.521 { 00:17:41.521 "results": [ 00:17:41.521 { 00:17:41.521 "job": "nvme0n1", 00:17:41.521 "core_mask": "0x2", 00:17:41.521 "workload": "randread", 00:17:41.521 "status": "finished", 00:17:41.521 "queue_depth": 128, 00:17:41.521 "io_size": 4096, 00:17:41.521 "runtime": 2.009387, 00:17:41.521 "iops": 17282.38512541387, 00:17:41.521 "mibps": 67.50931689614794, 00:17:41.521 "io_failed": 0, 00:17:41.521 "io_timeout": 0, 00:17:41.521 "avg_latency_us": 7401.159339785391, 00:17:41.521 "min_latency_us": 1742.6618181818183, 00:17:41.521 "max_latency_us": 20614.05090909091 00:17:41.521 } 00:17:41.521 ], 00:17:41.521 "core_count": 1 00:17:41.521 } 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:41.521 | select(.opcode=="crc32c") 00:17:41.521 | "\(.module_name) \(.executed)"' 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79683 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79683 ']' 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79683 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79683 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
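After the workload finishes, the test does not only look at IOPS: it pulls accel statistics out of bdevperf and checks that the crc32c digest work was actually executed, and by the expected module (software here, since this run has DSA disabled). A sketch of that check, using the same jq filter that appears in the log:

    # Ask the bdevperf accel layer which module handled crc32c and how many operations ran.
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    exp_module=software                      # scan_dsa=false, so no DSA offload expected
    (( acc_executed > 0 ))                   # digests must actually have been computed
    [[ $acc_module == "$exp_module" ]]       # ...and by the software accel module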
00:17:41.521 killing process with pid 79683 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79683' 00:17:41.521 Received shutdown signal, test time was about 2.000000 seconds 00:17:41.521 00:17:41.521 Latency(us) 00:17:41.521 [2024-11-26T19:25:39.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.521 [2024-11-26T19:25:39.961Z] =================================================================================================================== 00:17:41.521 [2024-11-26T19:25:39.961Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79683 00:17:41.521 19:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79683 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79730 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79730 /var/tmp/bperf.sock 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79730 ']' 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.826 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.826 [2024-11-26 19:25:40.162865] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:17:41.826 [2024-11-26 19:25:40.162994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79730 ] 00:17:41.826 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:41.826 Zero copy mechanism will not be used. 00:17:42.105 [2024-11-26 19:25:40.307726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.105 [2024-11-26 19:25:40.354791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.040 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.040 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:43.040 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:43.040 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:43.040 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:43.040 [2024-11-26 19:25:41.362083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:43.040 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.040 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.299 nvme0n1 00:17:43.299 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:43.299 19:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:43.557 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:43.557 Zero copy mechanism will not be used. 00:17:43.557 Running I/O for 2 seconds... 
00:17:45.431 7120.00 IOPS, 890.00 MiB/s [2024-11-26T19:25:43.871Z] 7048.00 IOPS, 881.00 MiB/s 00:17:45.431 Latency(us) 00:17:45.431 [2024-11-26T19:25:43.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.431 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:45.431 nvme0n1 : 2.00 7050.24 881.28 0.00 0.00 2266.45 1980.97 6315.29 00:17:45.431 [2024-11-26T19:25:43.871Z] =================================================================================================================== 00:17:45.431 [2024-11-26T19:25:43.871Z] Total : 7050.24 881.28 0.00 0.00 2266.45 1980.97 6315.29 00:17:45.431 { 00:17:45.431 "results": [ 00:17:45.431 { 00:17:45.431 "job": "nvme0n1", 00:17:45.431 "core_mask": "0x2", 00:17:45.431 "workload": "randread", 00:17:45.431 "status": "finished", 00:17:45.431 "queue_depth": 16, 00:17:45.431 "io_size": 131072, 00:17:45.431 "runtime": 2.001633, 00:17:45.431 "iops": 7050.243476201681, 00:17:45.431 "mibps": 881.2804345252101, 00:17:45.431 "io_failed": 0, 00:17:45.431 "io_timeout": 0, 00:17:45.431 "avg_latency_us": 2266.445945165945, 00:17:45.431 "min_latency_us": 1980.9745454545455, 00:17:45.431 "max_latency_us": 6315.2872727272725 00:17:45.431 } 00:17:45.431 ], 00:17:45.431 "core_count": 1 00:17:45.431 } 00:17:45.431 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:45.431 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:45.431 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:45.431 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:45.431 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:45.431 | select(.opcode=="crc32c") 00:17:45.431 | "\(.module_name) \(.executed)"' 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79730 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79730 ']' 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79730 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.690 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79730 00:17:45.948 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:45.948 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
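The throughput column in these tables is simply IOPS scaled by the block size of the run, which makes a quick sanity check possible: this second run uses 131072-byte I/O, so roughly 7050 IOPS corresponds to about 881 MiB/s, exactly what the JSON reports. A one-liner for the conversion, with values taken from the table above:

    iops=7050.24
    io_size=131072   # bytes per I/O in this run
    awk -v iops="$iops" -v sz="$io_size" \
        'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'   # -> 881.28 MiB/s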
00:17:45.948 killing process with pid 79730 00:17:45.948 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79730' 00:17:45.948 Received shutdown signal, test time was about 2.000000 seconds 00:17:45.948 00:17:45.948 Latency(us) 00:17:45.948 [2024-11-26T19:25:44.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.948 [2024-11-26T19:25:44.388Z] =================================================================================================================== 00:17:45.948 [2024-11-26T19:25:44.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.949 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79730 00:17:45.949 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79730 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79790 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79790 /var/tmp/bperf.sock 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79790 ']' 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:46.208 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:46.209 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.209 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.209 [2024-11-26 19:25:44.465727] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:17:46.209 [2024-11-26 19:25:44.465819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79790 ] 00:17:46.209 [2024-11-26 19:25:44.606951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.469 [2024-11-26 19:25:44.661367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.469 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.469 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:46.469 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:46.469 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:46.469 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:46.728 [2024-11-26 19:25:44.997780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:46.728 19:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.728 19:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.986 nvme0n1 00:17:46.986 19:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:46.986 19:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:47.245 Running I/O for 2 seconds... 
00:17:49.117 17527.00 IOPS, 68.46 MiB/s [2024-11-26T19:25:47.557Z] 18542.50 IOPS, 72.43 MiB/s 00:17:49.117 Latency(us) 00:17:49.117 [2024-11-26T19:25:47.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.117 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.117 nvme0n1 : 2.00 18582.36 72.59 0.00 0.00 6883.06 2398.02 16205.27 00:17:49.117 [2024-11-26T19:25:47.557Z] =================================================================================================================== 00:17:49.117 [2024-11-26T19:25:47.557Z] Total : 18582.36 72.59 0.00 0.00 6883.06 2398.02 16205.27 00:17:49.117 { 00:17:49.117 "results": [ 00:17:49.117 { 00:17:49.117 "job": "nvme0n1", 00:17:49.117 "core_mask": "0x2", 00:17:49.117 "workload": "randwrite", 00:17:49.117 "status": "finished", 00:17:49.117 "queue_depth": 128, 00:17:49.117 "io_size": 4096, 00:17:49.117 "runtime": 2.002598, 00:17:49.117 "iops": 18582.361512395397, 00:17:49.117 "mibps": 72.58734965779452, 00:17:49.117 "io_failed": 0, 00:17:49.117 "io_timeout": 0, 00:17:49.117 "avg_latency_us": 6883.058515914526, 00:17:49.117 "min_latency_us": 2398.021818181818, 00:17:49.117 "max_latency_us": 16205.265454545455 00:17:49.117 } 00:17:49.117 ], 00:17:49.117 "core_count": 1 00:17:49.117 } 00:17:49.117 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:49.375 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:49.375 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:49.375 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:49.375 | select(.opcode=="crc32c") 00:17:49.375 | "\(.module_name) \(.executed)"' 00:17:49.375 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79790 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79790 ']' 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79790 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79790 00:17:49.635 killing process with pid 79790 00:17:49.635 Received shutdown signal, test time was about 2.000000 seconds 00:17:49.635 00:17:49.635 Latency(us) 00:17:49.635 [2024-11-26T19:25:48.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:49.635 [2024-11-26T19:25:48.075Z] =================================================================================================================== 00:17:49.635 [2024-11-26T19:25:48.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79790' 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79790 00:17:49.635 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79790 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79844 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79844 /var/tmp/bperf.sock 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79844 ']' 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:49.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.895 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:49.895 [2024-11-26 19:25:48.155589] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:17:49.895 [2024-11-26 19:25:48.155916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79844 ] 00:17:49.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:49.895 Zero copy mechanism will not be used. 00:17:49.895 [2024-11-26 19:25:48.296485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.154 [2024-11-26 19:25:48.343766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.154 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.154 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:50.154 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:50.154 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:50.155 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:50.413 [2024-11-26 19:25:48.629026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:50.413 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.413 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.670 nvme0n1 00:17:50.670 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:50.670 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:50.929 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:50.929 Zero copy mechanism will not be used. 00:17:50.929 Running I/O for 2 seconds... 
00:17:52.795 5603.00 IOPS, 700.38 MiB/s [2024-11-26T19:25:51.235Z] 5577.00 IOPS, 697.12 MiB/s 00:17:52.795 Latency(us) 00:17:52.795 [2024-11-26T19:25:51.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.795 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:52.795 nvme0n1 : 2.00 5573.99 696.75 0.00 0.00 2864.77 2219.29 5749.29 00:17:52.795 [2024-11-26T19:25:51.235Z] =================================================================================================================== 00:17:52.795 [2024-11-26T19:25:51.235Z] Total : 5573.99 696.75 0.00 0.00 2864.77 2219.29 5749.29 00:17:52.795 { 00:17:52.795 "results": [ 00:17:52.795 { 00:17:52.795 "job": "nvme0n1", 00:17:52.795 "core_mask": "0x2", 00:17:52.795 "workload": "randwrite", 00:17:52.795 "status": "finished", 00:17:52.795 "queue_depth": 16, 00:17:52.795 "io_size": 131072, 00:17:52.795 "runtime": 2.00395, 00:17:52.795 "iops": 5573.991367050076, 00:17:52.795 "mibps": 696.7489208812596, 00:17:52.795 "io_failed": 0, 00:17:52.795 "io_timeout": 0, 00:17:52.795 "avg_latency_us": 2864.76846878815, 00:17:52.795 "min_latency_us": 2219.287272727273, 00:17:52.795 "max_latency_us": 5749.294545454545 00:17:52.795 } 00:17:52.795 ], 00:17:52.795 "core_count": 1 00:17:52.795 } 00:17:52.795 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:52.795 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:52.795 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:52.795 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:52.795 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:52.795 | select(.opcode=="crc32c") 00:17:52.795 | "\(.module_name) \(.executed)"' 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79844 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79844 ']' 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79844 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.077 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79844 00:17:53.078 killing process with pid 79844 00:17:53.078 Received shutdown signal, test time was about 2.000000 seconds 00:17:53.078 00:17:53.078 Latency(us) 00:17:53.078 [2024-11-26T19:25:51.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:53.078 [2024-11-26T19:25:51.518Z] =================================================================================================================== 00:17:53.078 [2024-11-26T19:25:51.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.078 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.078 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.078 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79844' 00:17:53.078 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79844 00:17:53.078 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79844 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79657 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79657 ']' 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79657 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79657 00:17:53.338 killing process with pid 79657 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79657' 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79657 00:17:53.338 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79657 00:17:53.597 00:17:53.597 real 0m15.927s 00:17:53.597 user 0m29.675s 00:17:53.597 sys 0m5.524s 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.597 ************************************ 00:17:53.597 END TEST nvmf_digest_clean 00:17:53.597 ************************************ 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:53.597 ************************************ 00:17:53.597 START TEST nvmf_digest_error 00:17:53.597 ************************************ 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:17:53.597 19:25:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79920 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79920 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79920 ']' 00:17:53.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.597 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.854 [2024-11-26 19:25:52.046607] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:17:53.854 [2024-11-26 19:25:52.046691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.854 [2024-11-26 19:25:52.195392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.854 [2024-11-26 19:25:52.249805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.854 [2024-11-26 19:25:52.250204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.854 [2024-11-26 19:25:52.250418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.854 [2024-11-26 19:25:52.250581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.854 [2024-11-26 19:25:52.250624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
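The nvmf_digest_error test starting here goes one step further than the clean variant: instead of merely counting crc32c executions, it reassigns the crc32c opcode to the error-injecting accel module and later corrupts a batch of digest results, so the initiator reports data digest errors and the affected commands complete as transient transport errors that the unlimited bdev retry count absorbs. A condensed sketch of the extra RPC calls it issues, taken from the commands visible in the run that follows (which application socket the plain rpc_cmd calls resolve to is elided here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Route every crc32c operation through the error accel module (issued through
    # the script's rpc_cmd helper before the subsystem is started).
    "$rpc" accel_assign_opc -o crc32c -m error

    # On the bdevperf side: keep NVMe error statistics and retry failed I/O forever,
    # then attach the target with the data digest enabled as before.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$rpc" accel_error_inject_error -o crc32c -t disable          # start with injection off
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt 256 digests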
00:17:53.854 [2024-11-26 19:25:52.251210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.854 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.854 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:53.854 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.854 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.854 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.112 [2024-11-26 19:25:52.323916] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.112 [2024-11-26 19:25:52.390482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.112 null0 00:17:54.112 [2024-11-26 19:25:52.443476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.112 [2024-11-26 19:25:52.467641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79943 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79943 /var/tmp/bperf.sock 00:17:54.112 19:25:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79943 ']' 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:54.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.112 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.112 [2024-11-26 19:25:52.521025] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:17:54.112 [2024-11-26 19:25:52.521247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79943 ] 00:17:54.370 [2024-11-26 19:25:52.665495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.370 [2024-11-26 19:25:52.708801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.370 [2024-11-26 19:25:52.761074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.629 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.629 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:54.629 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:54.629 19:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:54.629 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:54.629 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.629 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.887 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.887 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.887 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.145 nvme0n1 00:17:55.145 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:55.145 19:25:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.145 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:55.145 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.145 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:55.145 19:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:55.145 Running I/O for 2 seconds... 00:17:55.145 [2024-11-26 19:25:53.476519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.145 [2024-11-26 19:25:53.476734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.145 [2024-11-26 19:25:53.476861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.145 [2024-11-26 19:25:53.491493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.145 [2024-11-26 19:25:53.491726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.145 [2024-11-26 19:25:53.491886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.145 [2024-11-26 19:25:53.506666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.145 [2024-11-26 19:25:53.506852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.145 [2024-11-26 19:25:53.507026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.145 [2024-11-26 19:25:53.522659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.145 [2024-11-26 19:25:53.522842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.146 [2024-11-26 19:25:53.523028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.146 [2024-11-26 19:25:53.539654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.146 [2024-11-26 19:25:53.539869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.146 [2024-11-26 19:25:53.540019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.146 [2024-11-26 19:25:53.557430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.146 [2024-11-26 19:25:53.557613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22714 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.146 [2024-11-26 19:25:53.557647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.146 [2024-11-26 19:25:53.573729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.146 [2024-11-26 19:25:53.573764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.146 [2024-11-26 19:25:53.573791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.404 [2024-11-26 19:25:53.589772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.404 [2024-11-26 19:25:53.589946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.404 [2024-11-26 19:25:53.589977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.404 [2024-11-26 19:25:53.604992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.404 [2024-11-26 19:25:53.605025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.404 [2024-11-26 19:25:53.605052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.404 [2024-11-26 19:25:53.620079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.620114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.620140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.634989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.635022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.635049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.649922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.649954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.649982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.664879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.664957] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.664971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.679887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.679946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.679990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.694767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.694960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.694993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.710130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.710170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.710197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.724233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.724265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.724292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.738231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.738263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.738290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.752921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.752953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.752980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.767996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 
19:25:53.768048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.768077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.782677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.782713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.782741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.796923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.796954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.796982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.812399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.812430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.812457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.826445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.826476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.826502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.405 [2024-11-26 19:25:53.841118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.405 [2024-11-26 19:25:53.841150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-11-26 19:25:53.841178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.855908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.855967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.855980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.871944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.872033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.872062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.889133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.889169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.889199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.905203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.905237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.905280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.920530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.920876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.920907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.935701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.935737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.935765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.949970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.950003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.950031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.963929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.964004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.964030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.978172] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.978206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.978234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:53.992461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:53.992493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:53.992520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:54.006601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:54.006633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:54.006660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:54.020923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:54.020954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:54.020981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:54.034999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:54.035030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:54.035057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:54.049119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:54.049150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:54.049177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:54.063482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:54.063538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:54.063552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:55.664 [2024-11-26 19:25:54.078700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:54.078734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:54.078761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.664 [2024-11-26 19:25:54.092892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.664 [2024-11-26 19:25:54.092930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.664 [2024-11-26 19:25:54.092957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.923 [2024-11-26 19:25:54.108504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.108535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.108562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.122666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.122697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.122724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.137778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.137809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.137836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.152180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.152210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.152237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.166179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.166228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.166254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.180287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.180319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.180346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.194287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.194333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.194360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.208277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.208309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.208336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.222378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.222409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.222435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.236408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.236439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.236466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.250524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.250556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.250583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.264697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.264729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.264756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.278799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.278830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.278858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.292923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.292954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.292981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.306809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.306841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.306868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.321124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.321157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.321168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.336367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.336398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.336425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.924 [2024-11-26 19:25:54.350874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:55.924 [2024-11-26 19:25:54.350933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.924 [2024-11-26 19:25:54.350961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.366220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.366253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18800 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.366279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.380563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.380595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.380623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.394958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.395010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.395025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.415619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.415655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.415683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.429814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.429846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.429873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.454477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.454509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.454520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 16838.00 IOPS, 65.77 MiB/s [2024-11-26T19:25:54.623Z] [2024-11-26 19:25:54.467101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.467132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.467143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.484718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.484750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.484761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.503498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.503704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.503721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.521798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.521831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.521841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.539783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.539816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.539859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.558025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.558056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.558083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.183 [2024-11-26 19:25:54.575970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.183 [2024-11-26 19:25:54.576001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.183 [2024-11-26 19:25:54.576011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.184 [2024-11-26 19:25:54.591043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.184 [2024-11-26 19:25:54.591203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.184 [2024-11-26 19:25:54.591217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.184 [2024-11-26 19:25:54.605104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2431050) 00:17:56.184 [2024-11-26 19:25:54.605137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.184 [2024-11-26 19:25:54.605148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.184 [2024-11-26 19:25:54.619125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.184 [2024-11-26 19:25:54.619174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.184 [2024-11-26 19:25:54.619186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.633550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.633600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.633625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.647395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.647600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.647615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.661365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.661397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.661408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.674950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.674981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.675007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.688632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.688664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.688675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.702475] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.702509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.702536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.717548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.717583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.717610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.732439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.732470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.732497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.747038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.747070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.747097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.761198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.761229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.761255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.775272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.775306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.775334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.789214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.789247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.789274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:56.441 [2024-11-26 19:25:54.802987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.803019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.803046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.823136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.441 [2024-11-26 19:25:54.823168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.441 [2024-11-26 19:25:54.823195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.441 [2024-11-26 19:25:54.837292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.442 [2024-11-26 19:25:54.837324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.442 [2024-11-26 19:25:54.837350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.442 [2024-11-26 19:25:54.851217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.442 [2024-11-26 19:25:54.851371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.442 [2024-11-26 19:25:54.851403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.442 [2024-11-26 19:25:54.865956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.442 [2024-11-26 19:25:54.865989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.442 [2024-11-26 19:25:54.866016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.699 [2024-11-26 19:25:54.880352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.699 [2024-11-26 19:25:54.880385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.699 [2024-11-26 19:25:54.880411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.699 [2024-11-26 19:25:54.895009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:54.895042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:54.895069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:54.910148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:54.910182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:54.910210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:54.924909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:54.924966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:54.924993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:54.941254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:54.941406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:54.941436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:54.958998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:54.959194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:54.959209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:54.977123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:54.977155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:54.977166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:54.993823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:54.994010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:54.994026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.015295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.015328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:55.015339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.034102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.034143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:55.034155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.052852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.052884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:55.052906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.070088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.070121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:55.070131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.085783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.085815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:55.085825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.101826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.101858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:55.101868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.118161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.118194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 [2024-11-26 19:25:55.118221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.700 [2024-11-26 19:25:55.134426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.700 [2024-11-26 19:25:55.134460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.700 
[2024-11-26 19:25:55.134472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.959 [2024-11-26 19:25:55.150756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.959 [2024-11-26 19:25:55.150791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.959 [2024-11-26 19:25:55.150801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.959 [2024-11-26 19:25:55.165239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.959 [2024-11-26 19:25:55.165271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.959 [2024-11-26 19:25:55.165282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.959 [2024-11-26 19:25:55.179070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.959 [2024-11-26 19:25:55.179229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.959 [2024-11-26 19:25:55.179260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.959 [2024-11-26 19:25:55.193544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.959 [2024-11-26 19:25:55.193577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.959 [2024-11-26 19:25:55.193587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.959 [2024-11-26 19:25:55.207522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.959 [2024-11-26 19:25:55.207571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.207598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.221408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.221439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.221449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.235285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.235429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3480 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.235459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.250031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.250187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.250202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.264827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.265014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.265029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.278806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.278839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.278850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.292822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.292853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.292864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.308517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.308548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.308560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.323171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.323344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.323374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.338141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.338331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.338348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.353007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.353163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.353178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.367899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.368120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.368137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.960 [2024-11-26 19:25:55.383052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:56.960 [2024-11-26 19:25:55.383228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.960 [2024-11-26 19:25:55.383259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.218 [2024-11-26 19:25:55.398430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:57.218 [2024-11-26 19:25:55.398462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.218 [2024-11-26 19:25:55.398503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.218 [2024-11-26 19:25:55.413150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:57.218 [2024-11-26 19:25:55.413293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.218 [2024-11-26 19:25:55.413324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.218 [2024-11-26 19:25:55.428027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:57.218 [2024-11-26 19:25:55.428058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.218 [2024-11-26 19:25:55.428068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.218 [2024-11-26 19:25:55.442076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:57.218 [2024-11-26 19:25:55.442108] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.218 [2024-11-26 19:25:55.442118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.218 16669.00 IOPS, 65.11 MiB/s [2024-11-26T19:25:55.658Z] [2024-11-26 19:25:55.457290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2431050) 00:17:57.218 [2024-11-26 19:25:55.457321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.218 [2024-11-26 19:25:55.457332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.218 00:17:57.218 Latency(us) 00:17:57.218 [2024-11-26T19:25:55.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.218 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:57.218 nvme0n1 : 2.00 16708.75 65.27 0.00 0.00 7655.38 1787.35 29550.78 00:17:57.218 [2024-11-26T19:25:55.658Z] =================================================================================================================== 00:17:57.218 [2024-11-26T19:25:55.658Z] Total : 16708.75 65.27 0.00 0.00 7655.38 1787.35 29550.78 00:17:57.218 { 00:17:57.218 "results": [ 00:17:57.218 { 00:17:57.218 "job": "nvme0n1", 00:17:57.218 "core_mask": "0x2", 00:17:57.218 "workload": "randread", 00:17:57.218 "status": "finished", 00:17:57.218 "queue_depth": 128, 00:17:57.218 "io_size": 4096, 00:17:57.218 "runtime": 2.002903, 00:17:57.218 "iops": 16708.747253361744, 00:17:57.218 "mibps": 65.26854395844431, 00:17:57.218 "io_failed": 0, 00:17:57.218 "io_timeout": 0, 00:17:57.218 "avg_latency_us": 7655.382453181791, 00:17:57.218 "min_latency_us": 1787.3454545454545, 00:17:57.218 "max_latency_us": 29550.778181818183 00:17:57.218 } 00:17:57.218 ], 00:17:57.218 "core_count": 1 00:17:57.218 } 00:17:57.218 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:57.218 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:57.218 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:57.218 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:57.218 | .driver_specific 00:17:57.218 | .nvme_error 00:17:57.218 | .status_code 00:17:57.218 | .command_transient_transport_error' 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 )) 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79943 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79943 ']' 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79943 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.477 
19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79943 00:17:57.477 killing process with pid 79943 00:17:57.477 Received shutdown signal, test time was about 2.000000 seconds 00:17:57.477 00:17:57.477 Latency(us) 00:17:57.477 [2024-11-26T19:25:55.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.477 [2024-11-26T19:25:55.917Z] =================================================================================================================== 00:17:57.477 [2024-11-26T19:25:55.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79943' 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79943 00:17:57.477 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79943 00:17:57.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79996 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79996 /var/tmp/bperf.sock 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79996 ']' 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.736 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.736 [2024-11-26 19:25:56.009123] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:17:57.736 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:57.736 Zero copy mechanism will not be used. 
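For reference, the pass/fail decision for the run that just ended is the `(( 131 > 0 ))` check traced above (host/digest.sh@71): the harness asks the bdevperf instance for its NVMe error counters and requires at least one COMMAND TRANSIENT TRANSPORT ERROR. Below is a minimal sketch of that check done by hand, assuming the bperf RPC socket, bdev name, and jq filter shown in the trace; the function body is reconstructed from the digest.sh@27/@28 trace lines and is not the harness itself.

```bash
#!/usr/bin/env bash
# Reproduce the transient-error count check against a running bdevperf instance.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat carries per-NVMe error counters because the controller was
    # created with --nvme-error-stat; extract the transient transport error count.
    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The test only passes if at least one injected digest error was completed as a
# COMMAND TRANSIENT TRANSPORT ERROR (131 of them in the run logged above).
(( $(get_transient_errcount nvme0n1) > 0 ))
```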
00:17:57.736 [2024-11-26 19:25:56.009365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79996 ] 00:17:57.736 [2024-11-26 19:25:56.149348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.995 [2024-11-26 19:25:56.193360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.995 [2024-11-26 19:25:56.244768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.995 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.995 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:57.995 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:57.995 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:58.253 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:58.253 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.253 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.253 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.253 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.253 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.512 nvme0n1 00:17:58.512 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:58.512 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.512 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.512 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.512 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:58.512 19:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:58.772 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:58.772 Zero copy mechanism will not be used. 00:17:58.772 Running I/O for 2 seconds... 
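The trace above (digest.sh@61 through @69) is the full setup for the second error-injection pass. A minimal sketch of the same sequence follows, with the harness wrappers (bperf_rpc, rpc_cmd, bperf_py) replaced by direct rpc.py/bdevperf.py calls. Socket paths, the target address/NQN, and all flags are taken from the trace; the socket used for the accel injection RPC is not visible in the log (xtrace was disabled there), so the default target socket is assumed.

```bash
#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Host (bdevperf) side: keep NVMe error statistics and use a retry count of -1 so
# injected digest errors show up in the counters instead of failing the whole job.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Turn crc32c error injection off first (assumed: default RPC socket, as rpc_cmd
# uses in the trace), presumably so controller attach itself is undisturbed.
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the controller with data digest enabled (--ddgst); every data PDU on this
# queue pair now carries a CRC32C that the host verifies on receive.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable injection in "corrupt" mode with the same -i 32 interval as the trace;
# this is what produces the stream of data digest errors logged below.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the configured 2-second randread workload on the running bdevperf app.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
```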
00:17:58.772 [2024-11-26 19:25:57.029588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.772 [2024-11-26 19:25:57.029632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.772 [2024-11-26 19:25:57.029645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.772 [2024-11-26 19:25:57.034609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.772 [2024-11-26 19:25:57.034760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.772 [2024-11-26 19:25:57.034791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.772 [2024-11-26 19:25:57.039740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.772 [2024-11-26 19:25:57.039777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.772 [2024-11-26 19:25:57.039789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.772 [2024-11-26 19:25:57.044444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.772 [2024-11-26 19:25:57.044479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.772 [2024-11-26 19:25:57.044521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.772 [2024-11-26 19:25:57.049493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.049659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.049674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.054886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.055116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.055266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.061030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.061184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.061519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.067024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.067193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.067413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.072959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.073178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.073393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.079049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.079225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.079435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.084714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.084897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.090276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.090455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.090611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.096015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.096187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.096203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.101010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.101209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.101333] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.106145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.106341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.106463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.111255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.111428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.111577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.116521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.116694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.116711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.121961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.122169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.122291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.127495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.127719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.127855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.132746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.132969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.133098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.137867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.138111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 
19:25:57.138229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.143062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.143242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.143358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.148260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.148429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.148548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.153386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.153566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.153697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.158541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.158757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.158889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.163579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.163750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.163918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.168772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.168986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.169095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.173684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.173718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.173751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.178311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.178343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.178370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.182785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.182818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.182844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.773 [2024-11-26 19:25:57.187627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.773 [2024-11-26 19:25:57.187809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.773 [2024-11-26 19:25:57.187826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.774 [2024-11-26 19:25:57.192440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.774 [2024-11-26 19:25:57.192474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.774 [2024-11-26 19:25:57.192502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.774 [2024-11-26 19:25:57.197013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.774 [2024-11-26 19:25:57.197060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.774 [2024-11-26 19:25:57.197088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:58.774 [2024-11-26 19:25:57.201455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.774 [2024-11-26 19:25:57.201489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.774 [2024-11-26 19:25:57.201516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.774 [2024-11-26 19:25:57.206227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:58.774 [2024-11-26 19:25:57.206274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.774 [2024-11-26 19:25:57.206302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.048 [2024-11-26 19:25:57.211974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.048 [2024-11-26 19:25:57.212058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.048 [2024-11-26 19:25:57.212101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.048 [2024-11-26 19:25:57.217126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.048 [2024-11-26 19:25:57.217173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.048 [2024-11-26 19:25:57.217200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.048 [2024-11-26 19:25:57.221804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.048 [2024-11-26 19:25:57.221850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.048 [2024-11-26 19:25:57.221877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.048 [2024-11-26 19:25:57.226689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.226722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.226748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.232294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.232346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.232374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.237290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.237339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.237366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.242097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.242145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.242172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.246781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.246813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.246839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.251622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.251674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.251702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.256497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.256546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.256557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.261097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.261131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.261158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.265697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.265756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.265784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.270595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.270641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.270668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.275013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 
00:17:59.049 [2024-11-26 19:25:57.275059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.275086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.279710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.279745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.279773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.284181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.284228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.284255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.288771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.288801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.288828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.293290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.293320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.293346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.297794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.297824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.297850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.302234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.302265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.302292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.306714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.306745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.306772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.311078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.311124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.311151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.315673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.315722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.315734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.320233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.320263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.049 [2024-11-26 19:25:57.320290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.049 [2024-11-26 19:25:57.324738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.049 [2024-11-26 19:25:57.324769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.324796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.329077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.329107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.329135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.333596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.333642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.333669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.338160] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.338206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.338233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.342899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.342978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.343006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.347609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.347646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.347659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.352212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.352242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.352269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.356574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.356605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.356632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.360875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.360917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.360944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.365202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.365248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.365275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:17:59.050 [2024-11-26 19:25:57.369850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.369881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.369916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.374697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.374746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.374758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.379462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.379531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.379560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.384162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.384192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.384219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.388602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.388632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.388659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.393035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.393065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.393092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.397252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.397283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.397310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.401632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.401663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.401691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.405967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.405997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.406027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.410216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.410247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.410273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.414524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.050 [2024-11-26 19:25:57.414555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.050 [2024-11-26 19:25:57.414582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.050 [2024-11-26 19:25:57.418763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.418794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.418821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.423070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.423116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.423144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.427447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.427478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.427512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.432417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.432465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.432492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.437155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.437186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.437213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.441614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.441645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.441672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.446004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.446062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.446090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.450438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.450468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.450495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.454734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.454764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.454791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.458995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.459041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 
[2024-11-26 19:25:57.459068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.463348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.463378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.463405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.467636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.467669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.467696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.472024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.472054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.472080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.476275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.476305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.476332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.480491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.480521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.480548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.051 [2024-11-26 19:25:57.485118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.051 [2024-11-26 19:25:57.485149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.051 [2024-11-26 19:25:57.485176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.489557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.489588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.489614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.494049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.494080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.494106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.498350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.498380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.498407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.502611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.502642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.502668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.507578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.507613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.507626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.512649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.512683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.512711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.517608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.517642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.517655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.522307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.522342] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.311 [2024-11-26 19:25:57.522356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.311 [2024-11-26 19:25:57.527052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.311 [2024-11-26 19:25:57.527101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.527129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.531446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.531494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.531562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.536001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.536031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.536057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.540259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.540289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.540316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.544478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.544509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.544536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.548775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.548805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.548832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.553028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.553077] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.553088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.557392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.557423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.557450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.561725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.561756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.561783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.566049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.566079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.566107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.570394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.570426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.570453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.574751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.574782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.574808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.579057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.579103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.579130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.583572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.583606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.583634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.587846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.587921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.587935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.592139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.592171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.592198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.596398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.596428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.596455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.600666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.600696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.600723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.604928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.604958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.604984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.609151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.609197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.609224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.613413] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.613443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.613469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.617876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.617915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.617943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.622097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.622128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.622154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.626577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.626608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.626634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.630938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.630983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.631010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.635244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.635305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.635332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.639729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.639765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.639794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:17:59.312 [2024-11-26 19:25:57.644062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.644092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.644118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.648422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.648452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.648478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.652743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.652774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.652801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.656903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.656932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.656959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.661124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.661154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.661180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.665796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.665844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.665855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.670146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.670192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.670218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.674557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.674588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.674615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.678892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.678946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.678973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.683156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.683203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.683230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.687413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.687443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.687470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.691893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.691966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.691993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.696398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.696428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.696454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.700710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.700741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.700767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.705122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.705153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.312 [2024-11-26 19:25:57.705179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.312 [2024-11-26 19:25:57.709545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.312 [2024-11-26 19:25:57.709575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.709602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.714192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.714223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.714250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.718601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.718631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.718658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.722990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.723036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.723062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.727501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.727585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.727598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.732165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.732194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 
[2024-11-26 19:25:57.732220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.736847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.736877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.736904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.741387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.741418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.741444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.313 [2024-11-26 19:25:57.746272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.313 [2024-11-26 19:25:57.746319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.313 [2024-11-26 19:25:57.746346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.572 [2024-11-26 19:25:57.751094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.572 [2024-11-26 19:25:57.751140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.572 [2024-11-26 19:25:57.751167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.572 [2024-11-26 19:25:57.756106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.572 [2024-11-26 19:25:57.756136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.572 [2024-11-26 19:25:57.756163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.572 [2024-11-26 19:25:57.760670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.572 [2024-11-26 19:25:57.760700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.572 [2024-11-26 19:25:57.760726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.572 [2024-11-26 19:25:57.765204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.572 [2024-11-26 19:25:57.765234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.765261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.769789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.769819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.769846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.774198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.774228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.774254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.778626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.778656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.778683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.783033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.783079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.783106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.787997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.788045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.788063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.792626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.792656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.792682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.796952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.796981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.797008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.801287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.801317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.801343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.805961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.805992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.806018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.810601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.810631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.810657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.606 [2024-11-26 19:25:57.815129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.606 [2024-11-26 19:25:57.815175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.606 [2024-11-26 19:25:57.815201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.819879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.819937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.819980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.824485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.824531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.824558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.829128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.829158] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.829184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.833802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.833832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.833859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.838390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.838420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.838446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.842963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.843008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.843035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.847424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.847456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.847484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.852141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.852172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.852199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.856767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.856798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.856824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.861247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.861277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.861304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.865826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.865857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.865884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.870418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.870448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.870474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.875189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.875236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.875262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.879824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.879886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.879923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.884504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.884535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.884561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.889207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.889237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.889263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.893772] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.893802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.893828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.898392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.898422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.898449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.903077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.903129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.903156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.907933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.908006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.908045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.912610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.912657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.912684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.917208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.917238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.917265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.921834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.921880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.921906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.926645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.926675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.926701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.931135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.931181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.931207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.935800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.935863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.935890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.940339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.940369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.940396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.945020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.945050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.945076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.949892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.949974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.950003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.954779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.954809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.954837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.959968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.960036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.960064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.965245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.965323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.965350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.970495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.970525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.970552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.975493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.975564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.975592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.980341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.980372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.980398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.985139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.985187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.985215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.990036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.990083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.990114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.994708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.994750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.994777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:57.999435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:57.999466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:57.999493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.607 [2024-11-26 19:25:58.004162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.607 [2024-11-26 19:25:58.004192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.607 [2024-11-26 19:25:58.004219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.868 [2024-11-26 19:25:58.009294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.868 [2024-11-26 19:25:58.009342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.868 [2024-11-26 19:25:58.009369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.868 [2024-11-26 19:25:58.014046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.868 [2024-11-26 19:25:58.014093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.868 [2024-11-26 19:25:58.014119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.868 [2024-11-26 19:25:58.018661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.868 [2024-11-26 19:25:58.018691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.868 [2024-11-26 19:25:58.018702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.868 [2024-11-26 19:25:58.023352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.023398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:59.869 [2024-11-26 19:25:58.023426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.869 6634.00 IOPS, 829.25 MiB/s [2024-11-26T19:25:58.309Z] [2024-11-26 19:25:58.029357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.029405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.029432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.033966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.034013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.034040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.038407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.038437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.038463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.042807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.042837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.042863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.047110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.047157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.047170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.051689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.051724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.051736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.056051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.056080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.056107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.060379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.060409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.060436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.064774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.064806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.064832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.069155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.069185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.069211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.073515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.073545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.073572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.077845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.077876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.077902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.082837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.082886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.082910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.088066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 
00:17:59.869 [2024-11-26 19:25:58.088114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.088141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.093299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.093347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.093373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.098669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.098702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.098729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.104030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.104090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.104118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.109546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.109578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.109605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.114546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.114593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.114621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.119422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.119469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.119496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.124411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.124458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.124487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.129251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.129297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.129323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.133871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.133944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.869 [2024-11-26 19:25:58.133972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.869 [2024-11-26 19:25:58.138357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.869 [2024-11-26 19:25:58.138389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.138416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.142838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.142871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.142912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.147314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.147361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.147388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.151759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.151793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.151835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.156263] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.156309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.156336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.160584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.160630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.160657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.164920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.164975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.165002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.169279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.169326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.169353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.173615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.173660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.173687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.178086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.178133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.178160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.182495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.182542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.182569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:17:59.870 [2024-11-26 19:25:58.186883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.186952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.186980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.191304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.191350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.191376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.195934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.196003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.196030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.200473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.200519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.200546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.204814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.204861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.204888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.209585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.209633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.209660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.214238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.214286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.214313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.218987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.219034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.219061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.223427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.223474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.223501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.228020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.228067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.228095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.232758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.232804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.232831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.237458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.237505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.237531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.242288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.242367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.242394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.870 [2024-11-26 19:25:58.247128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.870 [2024-11-26 19:25:58.247176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.870 [2024-11-26 19:25:58.247203] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.251904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.251961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.252005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.256872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.256972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.257002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.261503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.261553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.261581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.266722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.266773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.266800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.271568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.271617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.271645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.276351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.276398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.276424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.281067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.281115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.281142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.285366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.285412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.285439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.289878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.289936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.289963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.294323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.294370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.294397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.298736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.298784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.298811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.871 [2024-11-26 19:25:58.303369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:17:59.871 [2024-11-26 19:25:58.303416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.871 [2024-11-26 19:25:58.303444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.131 [2024-11-26 19:25:58.308243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.131 [2024-11-26 19:25:58.308293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.131 [2024-11-26 19:25:58.308305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.131 [2024-11-26 19:25:58.312962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.131 [2024-11-26 19:25:58.313020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:00.131 [2024-11-26 19:25:58.313062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.131 [2024-11-26 19:25:58.317313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.131 [2024-11-26 19:25:58.317360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.131 [2024-11-26 19:25:58.317386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.131 [2024-11-26 19:25:58.321723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.321769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.321795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.326434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.326482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.326509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.330787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.330834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.330861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.335232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.335279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.335306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.340060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.340106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.340133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.344761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.344809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.344835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.349412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.349458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.349484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.353988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.354048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.354076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.358642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.358688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.358729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.363619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.363653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.363682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.368531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.368577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.368604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.373290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.373351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.373378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.378120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.378174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.378201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.383141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.383188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.383214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.387909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.387965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.387993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.392731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.392773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.392800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.397434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.397468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.397495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.402246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.402279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.402306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.407042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.407095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.407123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.411806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 
[2024-11-26 19:25:58.411839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.411868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.416596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.416645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.132 [2024-11-26 19:25:58.416673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.132 [2024-11-26 19:25:58.421396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.132 [2024-11-26 19:25:58.421443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.133 [2024-11-26 19:25:58.421471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.133 [2024-11-26 19:25:58.426259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.133 [2024-11-26 19:25:58.426307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.133 [2024-11-26 19:25:58.426334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.133 [2024-11-26 19:25:58.431024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.133 [2024-11-26 19:25:58.431056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.133 [2024-11-26 19:25:58.431084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.133 [2024-11-26 19:25:58.435776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.133 [2024-11-26 19:25:58.435825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.133 [2024-11-26 19:25:58.435867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.133 [2024-11-26 19:25:58.440991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edfa80) 00:18:00.133 [2024-11-26 19:25:58.441024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.133 [2024-11-26 19:25:58.441052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.133 [2024-11-26 19:25:58.445851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1edfa80)
00:18:00.133 [2024-11-26 19:25:58.445883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:00.133 [2024-11-26 19:25:58.445937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same pair of records (an nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0x1edfa80), followed by a READ command printed with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for qid:1, cid:0-15 over the remaining reads of the 2-second randread run ...]
00:18:00.659 6680.50 IOPS, 835.06 MiB/s [2024-11-26T19:25:59.099Z]
00:18:00.659 
00:18:00.659 Latency(us)
00:18:00.659 [2024-11-26T19:25:59.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:00.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:00.659 nvme0n1 : 2.00 6677.82 834.73 0.00 0.00 2392.77 1995.87 6345.08
00:18:00.659 [2024-11-26T19:25:59.099Z] ===================================================================================================================
00:18:00.659 [2024-11-26T19:25:59.099Z] Total : 6677.82 834.73 0.00 0.00 2392.77 1995.87 6345.08
00:18:00.659 {
00:18:00.659   "results": [
00:18:00.659     {
00:18:00.659       "job": "nvme0n1",
00:18:00.659       "core_mask": "0x2",
00:18:00.659       "workload": "randread",
00:18:00.659       "status": "finished",
00:18:00.659       "queue_depth": 16,
00:18:00.659       "io_size": 131072,
00:18:00.659       "runtime": 2.0032,
00:18:00.659       "iops": 6677.8154952076675,
00:18:00.659       "mibps": 834.7269369009584,
00:18:00.659       "io_failed": 0,
00:18:00.659       "io_timeout": 0,
00:18:00.659       "avg_latency_us": 2392.7719279360094,
00:18:00.659       "min_latency_us": 1995.8690909090908,
00:18:00.659       "max_latency_us": 6345.076363636364
00:18:00.659     }
00:18:00.659   ],
00:18:00.659   "core_count": 1
00:18:00.659 }
00:18:00.659 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:00.659 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:00.659 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:00.659 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:00.659 | .driver_specific
00:18:00.659 | .nvme_error
00:18:00.659 | .status_code
00:18:00.659 | .command_transient_transport_error'
00:18:01.225 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 432 > 0 ))
00:18:01.225 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79996
00:18:01.225 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79996 ']'
00:18:01.225 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79996
00:18:01.225 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:18:01.225 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:01.225 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79996
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:01.226 killing process with pid 79996
19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79996'
00:18:01.226 Received shutdown signal, test time was about 2.000000 seconds
00:18:01.226 
00:18:01.226 Latency(us)
00:18:01.226 [2024-11-26T19:25:59.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:01.226 [2024-11-26T19:25:59.666Z] ===================================================================================================================
00:18:01.226 [2024-11-26T19:25:59.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79996
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79996
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80044
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80044 /var/tmp/bperf.sock 00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80044 ']' 00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.226 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.226 [2024-11-26 19:25:59.649501] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:18:01.226 [2024-11-26 19:25:59.649598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80044 ] 00:18:01.484 [2024-11-26 19:25:59.792633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.484 [2024-11-26 19:25:59.834238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.484 [2024-11-26 19:25:59.884808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.741 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.741 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:01.741 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:01.741 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:01.999 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:01.999 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.999 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.999 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.999 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:01.999 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.258 nvme0n1 00:18:02.258 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:02.258 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.258 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.258 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.258 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:02.258 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:02.258 Running I/O for 2 seconds... 00:18:02.516 [2024-11-26 19:26:00.698728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fb048 00:18:02.517 [2024-11-26 19:26:00.699970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.700041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.713070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fb8b8 00:18:02.517 [2024-11-26 19:26:00.714239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.714319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.727540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fc128 00:18:02.517 [2024-11-26 19:26:00.728675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.728723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.741718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fc998 00:18:02.517 [2024-11-26 19:26:00.742835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.742869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.755442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fd208 00:18:02.517 [2024-11-26 19:26:00.756656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.756691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.769607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fda78 00:18:02.517 [2024-11-26 19:26:00.774988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.775025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.794653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fe2e8 00:18:02.517 [2024-11-26 19:26:00.796580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.796615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.813655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166feb58 00:18:02.517 [2024-11-26 19:26:00.815576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.815723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.840177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fef90 00:18:02.517 [2024-11-26 19:26:00.843416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.843450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.858734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166feb58 00:18:02.517 [2024-11-26 19:26:00.861974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.862008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.877406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fe2e8 00:18:02.517 [2024-11-26 19:26:00.880788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.880820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.895330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fda78 00:18:02.517 [2024-11-26 19:26:00.897655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.897687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.909737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fd208 00:18:02.517 [2024-11-26 19:26:00.912176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.912208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.924325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fc998 00:18:02.517 [2024-11-26 19:26:00.926607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.926774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:02.517 [2024-11-26 19:26:00.939102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fc128 00:18:02.517 [2024-11-26 19:26:00.941623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.517 [2024-11-26 19:26:00.941658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:02.776 [2024-11-26 19:26:00.954256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fb8b8 00:18:02.776 [2024-11-26 19:26:00.956893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.776 [2024-11-26 19:26:00.956966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:02.776 [2024-11-26 19:26:00.969308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fb048 00:18:02.776 [2024-11-26 19:26:00.971717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.776 [2024-11-26 19:26:00.971752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.776 [2024-11-26 19:26:00.983921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166fa7d8 00:18:02.776 [2024-11-26 19:26:00.986286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.776 [2024-11-26 19:26:00.986314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:02.776 [2024-11-26 19:26:00.998358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f9f68 00:18:02.776 [2024-11-26 19:26:01.000539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.776 [2024-11-26 19:26:01.000571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:02.776 [2024-11-26 19:26:01.012752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f96f8 00:18:02.776 [2024-11-26 19:26:01.014973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.776 [2024-11-26 19:26:01.015004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:02.776 [2024-11-26 19:26:01.028475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f8e88 00:18:02.776 [2024-11-26 19:26:01.030726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.030759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.044621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f8618 00:18:02.777 [2024-11-26 19:26:01.046876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.046948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.060365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f7da8 00:18:02.777 [2024-11-26 19:26:01.062612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.062644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.074796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f7538 00:18:02.777 [2024-11-26 19:26:01.076993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.077025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.089719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f6cc8 00:18:02.777 [2024-11-26 19:26:01.092236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.092268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.105009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f6458 00:18:02.777 [2024-11-26 19:26:01.107046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.107212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.119758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f5be8 00:18:02.777 [2024-11-26 19:26:01.122043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.122071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.134764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f5378 00:18:02.777 [2024-11-26 19:26:01.137015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.137045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.149429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f4b08 00:18:02.777 [2024-11-26 19:26:01.151497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.151678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.165274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f4298 00:18:02.777 [2024-11-26 19:26:01.167442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.167645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.180857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f3a28 00:18:02.777 [2024-11-26 19:26:01.183154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.183359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.196291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f31b8 00:18:02.777 [2024-11-26 19:26:01.198437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.777 [2024-11-26 19:26:01.198598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:02.777 [2024-11-26 19:26:01.211822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f2948 00:18:02.777 [2024-11-26 19:26:01.214265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.214448] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.227922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f20d8 00:18:03.035 [2024-11-26 19:26:01.230037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.230211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.243165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f1868 00:18:03.035 [2024-11-26 19:26:01.245314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.245490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.258219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f0ff8 00:18:03.035 [2024-11-26 19:26:01.260306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.260493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.274071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f0788 00:18:03.035 [2024-11-26 19:26:01.276238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.276391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.290500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eff18 00:18:03.035 [2024-11-26 19:26:01.292618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.292650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.306163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ef6a8 00:18:03.035 [2024-11-26 19:26:01.308212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.308402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.322020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eee38 00:18:03.035 [2024-11-26 19:26:01.323974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.324007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.337362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ee5c8 00:18:03.035 [2024-11-26 19:26:01.339321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.339354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.353361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166edd58 00:18:03.035 [2024-11-26 19:26:01.355379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.355410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.368880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ed4e8 00:18:03.035 [2024-11-26 19:26:01.370948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.370991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.385050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ecc78 00:18:03.035 [2024-11-26 19:26:01.386856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.386888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.400142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ec408 00:18:03.035 [2024-11-26 19:26:01.401910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.401968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.415372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ebb98 00:18:03.035 [2024-11-26 19:26:01.417294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.417340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.430527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eb328 00:18:03.035 [2024-11-26 19:26:01.432362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 
19:26:01.432394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.445631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eaab8 00:18:03.035 [2024-11-26 19:26:01.447599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.447630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.035 [2024-11-26 19:26:01.461018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ea248 00:18:03.035 [2024-11-26 19:26:01.462808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.035 [2024-11-26 19:26:01.462840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.476750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e99d8 00:18:03.294 [2024-11-26 19:26:01.478598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.478629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.492285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e9168 00:18:03.294 [2024-11-26 19:26:01.494033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.494065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.507254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e88f8 00:18:03.294 [2024-11-26 19:26:01.508872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.508928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.521970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e8088 00:18:03.294 [2024-11-26 19:26:01.523639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.523674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.536851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e7818 00:18:03.294 [2024-11-26 19:26:01.538573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:03.294 [2024-11-26 19:26:01.538606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.551553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e6fa8 00:18:03.294 [2024-11-26 19:26:01.553218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.553248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.566534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e6738 00:18:03.294 [2024-11-26 19:26:01.568295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.568326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.581352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e5ec8 00:18:03.294 [2024-11-26 19:26:01.583039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.583225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.596412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e5658 00:18:03.294 [2024-11-26 19:26:01.598084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.598117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.612112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e4de8 00:18:03.294 [2024-11-26 19:26:01.613826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.613858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.627148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e4578 00:18:03.294 [2024-11-26 19:26:01.628869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.628919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.642243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e3d08 00:18:03.294 [2024-11-26 19:26:01.643811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25553 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.643875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.658220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e3498 00:18:03.294 [2024-11-26 19:26:01.659691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.659840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.673620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e2c28 00:18:03.294 [2024-11-26 19:26:01.675096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.675128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:03.294 16067.00 IOPS, 62.76 MiB/s [2024-11-26T19:26:01.734Z] [2024-11-26 19:26:01.688846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e23b8 00:18:03.294 [2024-11-26 19:26:01.690289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.690323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.703760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e1b48 00:18:03.294 [2024-11-26 19:26:01.705373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.705407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.294 [2024-11-26 19:26:01.719247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e12d8 00:18:03.294 [2024-11-26 19:26:01.720677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.294 [2024-11-26 19:26:01.720709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.734807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e0a68 00:18:03.554 [2024-11-26 19:26:01.736331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.736364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.749681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e01f8 00:18:03.554 [2024-11-26 19:26:01.751254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.751282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.765540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166df988 00:18:03.554 [2024-11-26 19:26:01.767147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.767179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.782505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166df118 00:18:03.554 [2024-11-26 19:26:01.784039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.784071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.798608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166de8a8 00:18:03.554 [2024-11-26 19:26:01.800039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.800086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.814788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166de038 00:18:03.554 [2024-11-26 19:26:01.816240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.816271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.837608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166de038 00:18:03.554 [2024-11-26 19:26:01.840241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.840273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.853756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166de8a8 00:18:03.554 [2024-11-26 19:26:01.856462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.856490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.870290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166df118 00:18:03.554 [2024-11-26 19:26:01.872949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.873155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.886187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166df988 00:18:03.554 [2024-11-26 19:26:01.889013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.889180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.902382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e01f8 00:18:03.554 [2024-11-26 19:26:01.905194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.905366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.919404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e0a68 00:18:03.554 [2024-11-26 19:26:01.922178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.922356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.935769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e12d8 00:18:03.554 [2024-11-26 19:26:01.938451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.938622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.951923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e1b48 00:18:03.554 [2024-11-26 19:26:01.954479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.954649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.967941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e23b8 00:18:03.554 [2024-11-26 19:26:01.970403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.970571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.554 [2024-11-26 19:26:01.983757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e2c28 
00:18:03.554 [2024-11-26 19:26:01.986174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.554 [2024-11-26 19:26:01.986344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.000095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e3498 00:18:03.812 [2024-11-26 19:26:02.002641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.002811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.016082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e3d08 00:18:03.812 [2024-11-26 19:26:02.018678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.018833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.031800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e4578 00:18:03.812 [2024-11-26 19:26:02.034150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.034302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.047991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e4de8 00:18:03.812 [2024-11-26 19:26:02.050339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.050389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.064771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e5658 00:18:03.812 [2024-11-26 19:26:02.067156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.067190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.081188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e5ec8 00:18:03.812 [2024-11-26 19:26:02.083581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.083617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.096701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with 
pdu=0x2000166e6738 00:18:03.812 [2024-11-26 19:26:02.098887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.098961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.111555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e6fa8 00:18:03.812 [2024-11-26 19:26:02.113781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.113813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.127386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e7818 00:18:03.812 [2024-11-26 19:26:02.129576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.129604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.144144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e8088 00:18:03.812 [2024-11-26 19:26:02.146283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.146318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.160676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e88f8 00:18:03.812 [2024-11-26 19:26:02.162805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.162837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.176166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e9168 00:18:03.812 [2024-11-26 19:26:02.178300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.178331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.191017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166e99d8 00:18:03.812 [2024-11-26 19:26:02.193029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.193061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.205870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b2aae0) with pdu=0x2000166ea248 00:18:03.812 [2024-11-26 19:26:02.207903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.207963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.220768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eaab8 00:18:03.812 [2024-11-26 19:26:02.222955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.222986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:03.812 [2024-11-26 19:26:02.235423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eb328 00:18:03.812 [2024-11-26 19:26:02.237511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.812 [2024-11-26 19:26:02.237538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.251035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ebb98 00:18:04.070 [2024-11-26 19:26:02.253187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.253220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.266006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ec408 00:18:04.070 [2024-11-26 19:26:02.268411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.268443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.281395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ecc78 00:18:04.070 [2024-11-26 19:26:02.283434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.283464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.296621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ed4e8 00:18:04.070 [2024-11-26 19:26:02.298644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.298671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.311762] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166edd58 00:18:04.070 [2024-11-26 19:26:02.313938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.313966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.326623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ee5c8 00:18:04.070 [2024-11-26 19:26:02.328570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.328601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.341450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eee38 00:18:04.070 [2024-11-26 19:26:02.343253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.343451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.356633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166ef6a8 00:18:04.070 [2024-11-26 19:26:02.358460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.358493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.371245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166eff18 00:18:04.070 [2024-11-26 19:26:02.373487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.373519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.387537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f0788 00:18:04.070 [2024-11-26 19:26:02.389513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.389543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.402777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f0ff8 00:18:04.070 [2024-11-26 19:26:02.404649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.404698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.418151] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f1868 00:18:04.070 [2024-11-26 19:26:02.419939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.420012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.433208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f20d8 00:18:04.070 [2024-11-26 19:26:02.435395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.070 [2024-11-26 19:26:02.435443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:04.070 [2024-11-26 19:26:02.448520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f2948 00:18:04.070 [2024-11-26 19:26:02.450515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.071 [2024-11-26 19:26:02.450548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.071 [2024-11-26 19:26:02.463462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f31b8 00:18:04.071 [2024-11-26 19:26:02.465229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.071 [2024-11-26 19:26:02.465260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:04.071 [2024-11-26 19:26:02.478815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f3a28 00:18:04.071 [2024-11-26 19:26:02.480658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.071 [2024-11-26 19:26:02.480690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:04.071 [2024-11-26 19:26:02.494128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f4298 00:18:04.071 [2024-11-26 19:26:02.496228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.071 [2024-11-26 19:26:02.496261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:04.329 [2024-11-26 19:26:02.510282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f4b08 00:18:04.329 [2024-11-26 19:26:02.512245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.512280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:04.329 
[2024-11-26 19:26:02.525531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f5378 00:18:04.329 [2024-11-26 19:26:02.527295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.527328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:04.329 [2024-11-26 19:26:02.540895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f5be8 00:18:04.329 [2024-11-26 19:26:02.542577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.542609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:04.329 [2024-11-26 19:26:02.556214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f6458 00:18:04.329 [2024-11-26 19:26:02.557904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.557964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:04.329 [2024-11-26 19:26:02.571673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f6cc8 00:18:04.329 [2024-11-26 19:26:02.573397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.573430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:04.329 [2024-11-26 19:26:02.586967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f7538 00:18:04.329 [2024-11-26 19:26:02.588678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.588727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:04.329 [2024-11-26 19:26:02.602448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f7da8 00:18:04.329 [2024-11-26 19:26:02.604293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.604324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:04.329 [2024-11-26 19:26:02.618175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f8618 00:18:04.329 [2024-11-26 19:26:02.619995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.329 [2024-11-26 19:26:02.620042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 
m:0 dnr:0
00:18:04.329 [2024-11-26 19:26:02.634996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f8e88 [2024-11-26 19:26:02.636609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-26 19:26:02.636639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:18:04.329 [2024-11-26 19:26:02.651489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f96f8 [2024-11-26 19:26:02.653021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-26 19:26:02.653066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:18:04.329 [2024-11-26 19:26:02.667186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166f9f68 [2024-11-26 19:26:02.668722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-26 19:26:02.668767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:18:04.329 16130.00 IOPS, 63.01 MiB/s [2024-11-26T19:26:02.769Z] [2024-11-26 19:26:02.684862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b2aae0) with pdu=0x2000166de8a8 [2024-11-26 19:26:02.685066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-26 19:26:02.685119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:18:04.329
00:18:04.329 Latency(us)
00:18:04.329 [2024-11-26T19:26:02.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:04.329 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:04.329 nvme0n1 : 2.01 16140.62 63.05 0.00 0.00 7914.81 4170.47 35746.91
00:18:04.329 [2024-11-26T19:26:02.769Z] ===================================================================================================================
00:18:04.329 [2024-11-26T19:26:02.769Z] Total : 16140.62 63.05 0.00 0.00 7914.81 4170.47 35746.91
00:18:04.329 {
00:18:04.329   "results": [
00:18:04.329     {
00:18:04.329       "job": "nvme0n1",
00:18:04.329       "core_mask": "0x2",
00:18:04.329       "workload": "randwrite",
00:18:04.329       "status": "finished",
00:18:04.329       "queue_depth": 128,
00:18:04.329       "io_size": 4096,
00:18:04.329       "runtime": 2.007915,
00:18:04.329       "iops": 16140.623482567738,
00:18:04.329       "mibps": 63.04931047878023,
00:18:04.329       "io_failed": 0,
00:18:04.329       "io_timeout": 0,
00:18:04.329       "avg_latency_us": 7914.806300157925,
00:18:04.329       "min_latency_us": 4170.472727272727,
00:18:04.329       "max_latency_us": 35746.90909090909
00:18:04.329     }
00:18:04.329   ],
00:18:04.329   "core_count": 1
00:18:04.329 }
00:18:04.329 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:04.329 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27
-- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:04.329 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:04.329 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:04.329 | .driver_specific
00:18:04.329 | .nvme_error
00:18:04.329 | .status_code
00:18:04.329 | .command_transient_transport_error'
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 ))
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80044
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80044 ']'
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80044
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80044
00:18:04.587 killing process with pid 80044
Received shutdown signal, test time was about 2.000000 seconds
00:18:04.587
00:18:04.587 Latency(us)
00:18:04.587 [2024-11-26T19:26:03.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:04.587 [2024-11-26T19:26:03.027Z] ===================================================================================================================
00:18:04.587 [2024-11-26T19:26:03.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80044'
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80044
00:18:04.587 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80044
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80097
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80097 /var/tmp/bperf.sock
00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:04.844
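For reference, the transient-error check traced above (get_transient_errcount, host/digest.sh@27-@28 and @71) reduces to the shell sketch below; the rpc.py path, RPC socket, bdev name and jq filter are exactly the ones echoed in the trace, and only the variable name is illustrative:
  # Query bdevperf's per-bdev I/O statistics over its RPC socket (bdev_nvme was started
  # with --nvme-error-stat, so NVMe status-code counters are kept), then pull out how many
  # completions carried the TRANSIENT TRANSPORT ERROR (00/22) status and require at least one.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # 127 in this pass
At 16140.62 IOPS over the 2.007915 s runtime reported above, that is roughly 32,400 writes issued; 127 of them first completed with the transient transport error status, and io_failed stays 0, consistent with bdev_nvme retrying them under --bdev-retry-count -1.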
19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80097 ']' 00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:04.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.844 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:04.844 [2024-11-26 19:26:03.247929] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:18:04.844 [2024-11-26 19:26:03.248016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80097 ] 00:18:04.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:04.844 Zero copy mechanism will not be used. 00:18:05.102 [2024-11-26 19:26:03.388259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.102 [2024-11-26 19:26:03.434620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.102 [2024-11-26 19:26:03.486713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.361 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.361 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:05.361 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:05.361 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:05.626 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:05.627 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.627 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.627 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.627 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:05.627 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:05.898 nvme0n1 00:18:05.898 19:26:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:05.898 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.898 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.898 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.898 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:05.898 19:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:06.157 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:06.157 Zero copy mechanism will not be used. 00:18:06.157 Running I/O for 2 seconds... 00:18:06.157 [2024-11-26 19:26:04.413825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.413947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.413975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.157 [2024-11-26 19:26:04.419336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.419444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.419467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.157 [2024-11-26 19:26:04.424721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.424816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.424838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.157 [2024-11-26 19:26:04.429848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.429945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.429967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.157 [2024-11-26 19:26:04.434764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.434853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.434873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.157 
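For reference, the setup for this second pass traced above (run_bperf_err randwrite 131072 16, host/digest.sh@61-@69) condenses to the sketch below; the addresses, NQN and injection arguments are taken verbatim from the echoed commands, and the comments only restate what the trace shows:
  # keep NVMe error-status counters on the bdevperf side and retry failed I/O indefinitely
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any crc32c error injection left over in the accel layer from the previous pass
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # attach the TCP controller with data digest enabled (--ddgst)
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # re-arm corruption of the crc32c operation (arguments as echoed above), then run the workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  bperf_py perform_tests
Each corrupted digest then shows up in the flood below as a data_crc32_calc_done error followed by a WRITE completion carrying the TRANSIENT TRANSPORT ERROR (00/22) status.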
[2024-11-26 19:26:04.439854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.439999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.440020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.157 [2024-11-26 19:26:04.444940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.445046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.445067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.157 [2024-11-26 19:26:04.449941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.450047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.157 [2024-11-26 19:26:04.450068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.157 [2024-11-26 19:26:04.454842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.157 [2024-11-26 19:26:04.454966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.454988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.459972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.460048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.460069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.465011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.465094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.465114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.470041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.470133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.470153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.474998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.475086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.475106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.480011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.480106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.480127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.485005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.485100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.485120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.490075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.490166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.490187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.495653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.495723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.495746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.500865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.500976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.500996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.505884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.505983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.506003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.510842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.510980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.511000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.516008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.516098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.516119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.520961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.521064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.521083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.526078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.526165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.526186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.531049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.531130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.531150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.536135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.536222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.536243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.541159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.541261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.541281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.546287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.546373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.546394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.551155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.551251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.551271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.556326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.556413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.556434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.561250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.561352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.561371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.566283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.566379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.566400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.571202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.571271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.571291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.576405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.576521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.576542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.581250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.581339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.581359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.586389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.586500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.586520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.158 [2024-11-26 19:26:04.591351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.158 [2024-11-26 19:26:04.591445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.158 [2024-11-26 19:26:04.591482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.596706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.596832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.596852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.601810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.601906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.601938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.606860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.606960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.606982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.611726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.611795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 
19:26:04.611816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.616838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.616968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.616989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.621793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.621893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.621941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.626804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.626894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.626915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.631675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.631751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.631772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.636610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.636695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.636715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.641572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.641673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.641693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.646432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.646530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:06.418 [2024-11-26 19:26:04.646550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.651390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.651491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.651538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.656441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.656515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.656534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.661369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.661456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.418 [2024-11-26 19:26:04.661476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.418 [2024-11-26 19:26:04.666360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.418 [2024-11-26 19:26:04.666464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.666485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.671187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.671289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.671309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.676512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.676615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.676636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.681688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.681803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.681824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.686909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.687024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.687044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.691949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.692033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.692084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.696957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.697056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.697075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.701842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.701953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.701972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.706739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.706841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.706877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.711890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.712001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.712021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.716752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.716841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.716861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.721684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.721778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.721799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.726608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.726716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.726735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.731688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.731781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.731817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.736723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.736810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.736830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.741678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.741779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.741799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.746664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.746748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.746768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.752062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.752159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.752180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.757373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.757470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.757490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.762251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.762335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.762355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.767318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.767414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.767435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.772326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.772413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.772433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.777354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.777441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.777460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.782238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.782324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.782343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.787162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.787248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.787267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.792218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.792288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.792308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.797185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.797272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.419 [2024-11-26 19:26:04.797292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.419 [2024-11-26 19:26:04.802082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.419 [2024-11-26 19:26:04.802188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.802208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.807087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.807177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.807197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.812205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.812293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.812313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.817035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.817136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.817156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.821869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 
19:26:04.821968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.822003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.826739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.826831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.826852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.831647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.831722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.831742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.836668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.836768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.836787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.841634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.841722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.841743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.846449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.846537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.846557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.420 [2024-11-26 19:26:04.851606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.420 [2024-11-26 19:26:04.851696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.420 [2024-11-26 19:26:04.851717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.856977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with 
pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.857064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.857085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.862241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.862335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.862354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.867143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.867231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.867251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.872255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.872343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.872362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.877171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.877258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.877278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.882255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.882328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.882348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.887162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.887249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.887269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.892202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.892286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.892306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.897107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.897193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.897212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.902049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.902139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.902159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.907030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.680 [2024-11-26 19:26:04.907126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.680 [2024-11-26 19:26:04.907146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.680 [2024-11-26 19:26:04.912033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.912121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.912141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.916992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.917090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.917109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.921881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.921993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.922029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.926867] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.926981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.927001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.931688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.931780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.931801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.936852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.936964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.936984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.941706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.941783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.941803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.946723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.946826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.946846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.951582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.951657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.951676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.956680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.956769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.956789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.681 
[2024-11-26 19:26:04.961486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.961575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.961595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.966471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.966579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.966599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.971321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.971423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.971443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.976376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.976474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.976494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.981159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.981259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.981278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.986415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.986491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.986510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.991121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.991208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.991227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:04.996285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:04.996370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:04.996391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.001183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.001284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.001303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.006251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.006340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.006360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.011610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.011709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.011731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.016779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.016881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.016902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.021681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.021767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.021786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.026658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.026775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.026795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.031492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.031612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.031634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.036597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.036697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.036719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.041525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.041626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.041646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.046458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.046544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.046565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.051557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.051630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.681 [2024-11-26 19:26:05.051651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.681 [2024-11-26 19:26:05.056560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.681 [2024-11-26 19:26:05.056647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.056667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.061565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.061667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.061687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.066478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.066577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.066596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.071476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.071629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.071650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.076453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.076538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.076558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.081311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.081427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.081448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.086321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.086407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.086428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.091227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.091327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.091348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.096243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.096360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.096380] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.101182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.101271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.101292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.106162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.106261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.106281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.682 [2024-11-26 19:26:05.111121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.682 [2024-11-26 19:26:05.111227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.682 [2024-11-26 19:26:05.111247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.116758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.116844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.942 [2024-11-26 19:26:05.116866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.122299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.122399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.942 [2024-11-26 19:26:05.122421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.127568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.127646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.942 [2024-11-26 19:26:05.127679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.133137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.133232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.942 [2024-11-26 19:26:05.133255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.138535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.138638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.942 [2024-11-26 19:26:05.138658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.143966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.144072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.942 [2024-11-26 19:26:05.144094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.149399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.149501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.942 [2024-11-26 19:26:05.149521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.942 [2024-11-26 19:26:05.154816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.942 [2024-11-26 19:26:05.154935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.154957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.160167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.160270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.160289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.165450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.165538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.165558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.170653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.170748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 
19:26:05.170767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.175727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.175816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.175836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.180736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.180834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.180854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.185826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.185923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.185944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.190812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.190918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.190938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.196143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.196232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.196254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.201351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.201438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.201460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.206695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.206767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:06.943 [2024-11-26 19:26:05.206789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.211682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.211758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.211795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.216795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.216884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.216905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.221671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.221758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.221777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.226636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.226727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.226747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.231562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.231652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.231672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.236511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.236607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.236627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.241563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.241665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.241685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.246614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.246723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.246743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.251617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.251706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.251726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.256723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.256822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.256842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.261659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.261777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.261796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.267023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.267093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.267113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.272388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.272498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.272519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.277430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.277517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.277537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.282443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.282555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.282575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.943 [2024-11-26 19:26:05.287581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.943 [2024-11-26 19:26:05.287659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.943 [2024-11-26 19:26:05.287680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.292622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.292709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.292729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.297601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.297688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.297707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.302618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.302711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.302731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.307715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.307785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.307805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.312728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.312830] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.312849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.317746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.317846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.317865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.322630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.322724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.322744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.327734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.327830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.327849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.332614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.332691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.332712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.337593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.337680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.337701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.342529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.342629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.342649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.347553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.347632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.347654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.352481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.352579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.352599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.357440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.357537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.357556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.362431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.362520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.362539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.367395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.367497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.367546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.372425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.372532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.372551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:06.944 [2024-11-26 19:26:05.377901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:06.944 [2024-11-26 19:26:05.378014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.944 [2024-11-26 19:26:05.378034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.383180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 
19:26:05.383270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.383290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.388349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.388435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.388455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.393261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.393349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.393369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.398221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.398302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.398336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.403064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.403169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.403190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.204 6091.00 IOPS, 761.38 MiB/s [2024-11-26T19:26:05.644Z] [2024-11-26 19:26:05.409428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.409528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.409549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.414492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.414593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.414613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.419776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.419864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.419886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.425059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.425135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.425157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.430403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.430490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.430511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.435791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.435888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.435924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.440966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.441059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.204 [2024-11-26 19:26:05.441081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.204 [2024-11-26 19:26:05.446410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.204 [2024-11-26 19:26:05.446494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.446516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.451766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.451877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.451925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.456954] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.457065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.457086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.462212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.462316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.462336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.467218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.467298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.467320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.472349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.472440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.472461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.477470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.477589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.477610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.482613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.482702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.482724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.487483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.487615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.487636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.205 
[2024-11-26 19:26:05.492711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.492810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.492831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.497919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.498006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.498041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.503125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.503216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.503237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.508392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.508491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.508512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.513756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.513850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.513872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.518747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.518834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.518854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.524007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.524090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.524112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.529371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.529455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.529476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.534405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.534496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.534519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.539499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.539607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.539629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.544680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.544769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.544789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.549827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.549924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.549946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.554822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.554920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.554941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.560002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.560072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.560093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.565199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.565270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.565290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.570215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.570308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.570328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.575471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.575589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.575612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.580675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.580765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.205 [2024-11-26 19:26:05.580785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.205 [2024-11-26 19:26:05.585800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.205 [2024-11-26 19:26:05.585890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.585923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.591078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.591158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.591179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.596220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.596312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.596333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.601455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.601562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.601583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.606746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.606829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.606851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.611951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.612034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.612056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.617088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.617174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.617194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.622129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.622220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.622241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.627206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.627322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.627342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.632352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.632441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.632461] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.206 [2024-11-26 19:26:05.637743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.206 [2024-11-26 19:26:05.637860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.206 [2024-11-26 19:26:05.637881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.643084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.643214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.643252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.648677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.648762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.648782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.653887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.654052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.654074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.659051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.659137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.659158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.664272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.664391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.664412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.669458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.669547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.669567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.674697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.674801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.674821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.679764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.679900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.679920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.684994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.685074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.685093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.689934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.690020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.690040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.694859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.694995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.695017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.700206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.700306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.700325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.705295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.705397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 
19:26:05.705433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.710471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.710593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.710613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.466 [2024-11-26 19:26:05.715701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.466 [2024-11-26 19:26:05.715779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.466 [2024-11-26 19:26:05.715801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.720910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.721025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.721045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.726068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.726149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.726170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.731160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.731249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.731271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.736371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.736462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.736484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.741422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.741509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:07.467 [2024-11-26 19:26:05.741529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.746472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.746580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.746601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.751568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.751646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.751668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.756774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.756876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.756896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.761961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.762061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.762081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.766991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.767098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.767118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.772332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.772421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.772441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.777435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.777540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.777562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.782563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.782682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.782703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.788082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.788155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.788175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.793215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.793303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.793324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.798208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.798295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.798315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.803211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.803325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.803346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.808253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.808326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.808345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.813310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.813399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.813419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.818384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.818492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.818514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.823565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.823644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.467 [2024-11-26 19:26:05.823666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.467 [2024-11-26 19:26:05.828739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.467 [2024-11-26 19:26:05.828871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.828891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.833964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.834054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.834075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.839008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.839113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.839149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.844151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.844247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.844267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.849171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.849258] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.849276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.854389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.854478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.854498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.859382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.859469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.859489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.864682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.864795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.864816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.869698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.869798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.869818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.874771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.874860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.874880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.879650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.879738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.879758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.884697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.884786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.884807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.889878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.889969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.889990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.895024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.895115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.895135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.468 [2024-11-26 19:26:05.900356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.468 [2024-11-26 19:26:05.900493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.468 [2024-11-26 19:26:05.900515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.905690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.905796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.905815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.911189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.911279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.911300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.916358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.916448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.916468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.921517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 
19:26:05.921612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.921632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.926718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.926807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.926827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.931956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.932058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.937080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.937174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.937197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.942075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.942151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.942171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.947356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.947472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.947492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.952602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.952717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.952739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.957753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with 
pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.957843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.957863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.962938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.963025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.963045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.967968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.729 [2024-11-26 19:26:05.968058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.729 [2024-11-26 19:26:05.968079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.729 [2024-11-26 19:26:05.973037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:05.973144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:05.973165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:05.978193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:05.978265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:05.978286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:05.983259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:05.983356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:05.983378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:05.988392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:05.988480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:05.988500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:05.993558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:05.993650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:05.993671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:05.998684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:05.998774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:05.998794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.003818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.003929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.003963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.008884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.009003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.009022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.013994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.014086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.014123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.019155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.019244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.019265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.024417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.024506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.024527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.029461] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.029564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.029600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.034492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.034595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.034615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.039557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.039634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.039655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.044830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.044960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.044981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.049834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.049946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.049966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.055030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.055120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.055141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.060327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.060415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.060435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.065305] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.065425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.065446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.070531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.070631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.070652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.075844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.075947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.075984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.081060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.081153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.730 [2024-11-26 19:26:06.081189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.730 [2024-11-26 19:26:06.086188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.730 [2024-11-26 19:26:06.086283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.086303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.091430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.091546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.091568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.096720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.096823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.096842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.731 
[2024-11-26 19:26:06.101886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.102016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.102037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.107024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.107114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.107135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.112187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.112275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.112295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.117333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.117422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.117442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.122426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.122527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.122547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.127597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.127690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.127712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.132730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.132814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.132835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.138055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.138163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.138185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.143432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.143531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.143553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.148747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.148847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.148870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.154035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.154133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.154155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.159336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.159439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.159460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.731 [2024-11-26 19:26:06.164870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.731 [2024-11-26 19:26:06.165000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.731 [2024-11-26 19:26:06.165023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.170255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.170340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.170362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.175696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.175782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.175804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.181018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.181111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.181133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.186357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.186470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.186491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.191576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.191669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.191691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.196815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.196918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.196955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.201906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.201998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.202019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.207011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.207090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.207110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.212195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.212293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.212314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.217439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.217540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.217560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.222438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.222565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.222585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.227718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.227805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.227826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.232743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.232831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.232850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.237829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.237920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.237942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.242904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.243034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.243055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.248130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.248202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.248223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.253109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.253199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.253235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.992 [2024-11-26 19:26:06.258159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.992 [2024-11-26 19:26:06.258254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.992 [2024-11-26 19:26:06.258281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.263353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.263443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.263464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.268603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.268706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.268727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.273749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.273853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.273874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.278763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.278852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 
19:26:06.278873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.283923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.284043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.284064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.289008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.289103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.289123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.294148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.294248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.294268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.299110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.299200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.299235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.304290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.304390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.304410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.309502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.309604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.309624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.314665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.314753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:07.993 [2024-11-26 19:26:06.314773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.319893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.320045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.320064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.324970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.325073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.325093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.330349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.330426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.330448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.335896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.336042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.336062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.341007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.341108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.341127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.346163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.346277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.346299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.351278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.351366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.351386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.356409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.356502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.356523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.361594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.361698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.361718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.366719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.366804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.366824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.371900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.372016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.372037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.376959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.377063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.377083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.381970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.382061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.382081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.387117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.387215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.392328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.392419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.993 [2024-11-26 19:26:06.392440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.993 [2024-11-26 19:26:06.397417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.993 [2024-11-26 19:26:06.397520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.994 [2024-11-26 19:26:06.397540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.994 [2024-11-26 19:26:06.402491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.994 [2024-11-26 19:26:06.402580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.994 [2024-11-26 19:26:06.402601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.994 6039.00 IOPS, 754.88 MiB/s [2024-11-26T19:26:06.434Z] [2024-11-26 19:26:06.408270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b175b0) with pdu=0x2000166ff3c8 00:18:07.994 [2024-11-26 19:26:06.408344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.994 [2024-11-26 19:26:06.408365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.994 00:18:07.994 Latency(us) 00:18:07.994 [2024-11-26T19:26:06.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.994 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:07.994 nvme0n1 : 2.00 6036.79 754.60 0.00 0.00 2644.60 2204.39 11319.85 00:18:07.994 [2024-11-26T19:26:06.434Z] =================================================================================================================== 00:18:07.994 [2024-11-26T19:26:06.434Z] Total : 6036.79 754.60 0.00 0.00 2644.60 2204.39 11319.85 00:18:07.994 { 00:18:07.994 "results": [ 00:18:07.994 { 00:18:07.994 "job": "nvme0n1", 00:18:07.994 "core_mask": "0x2", 00:18:07.994 "workload": "randwrite", 00:18:07.994 "status": "finished", 00:18:07.994 "queue_depth": 16, 00:18:07.994 "io_size": 131072, 00:18:07.994 "runtime": 2.003216, 00:18:07.994 "iops": 6036.792837117914, 00:18:07.994 "mibps": 754.5991046397393, 00:18:07.994 "io_failed": 0, 00:18:07.994 "io_timeout": 0, 00:18:07.994 "avg_latency_us": 2644.5977973733866, 00:18:07.994 "min_latency_us": 2204.3927272727274, 00:18:07.994 "max_latency_us": 11319.854545454546 00:18:07.994 } 00:18:07.994 ], 00:18:07.994 
"core_count": 1 00:18:07.994 } 00:18:08.253 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:08.253 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:08.253 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:08.253 | .driver_specific 00:18:08.253 | .nvme_error 00:18:08.253 | .status_code 00:18:08.253 | .command_transient_transport_error' 00:18:08.253 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 391 > 0 )) 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80097 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80097 ']' 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80097 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80097 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:08.512 killing process with pid 80097 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80097' 00:18:08.512 Received shutdown signal, test time was about 2.000000 seconds 00:18:08.512 00:18:08.512 Latency(us) 00:18:08.512 [2024-11-26T19:26:06.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.512 [2024-11-26T19:26:06.952Z] =================================================================================================================== 00:18:08.512 [2024-11-26T19:26:06.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80097 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80097 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79920 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79920 ']' 00:18:08.512 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79920 00:18:08.771 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:08.771 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.771 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79920 00:18:08.771 19:26:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.771 killing process with pid 79920 00:18:08.771 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.771 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79920' 00:18:08.771 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79920 00:18:08.771 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79920 00:18:08.771 00:18:08.771 real 0m15.180s 00:18:08.771 user 0m28.849s 00:18:08.771 sys 0m4.997s 00:18:08.771 ************************************ 00:18:08.771 END TEST nvmf_digest_error 00:18:08.771 ************************************ 00:18:08.771 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.771 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.030 rmmod nvme_tcp 00:18:09.030 rmmod nvme_fabrics 00:18:09.030 rmmod nvme_keyring 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79920 ']' 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79920 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79920 ']' 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79920 00:18:09.030 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79920) - No such process 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79920 is not found' 00:18:09.030 Process with pid 79920 is not found 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:09.030 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:09.289 00:18:09.289 real 0m32.206s 00:18:09.289 user 0m58.821s 00:18:09.289 sys 0m10.937s 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.289 ************************************ 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:09.289 END TEST nvmf_digest 00:18:09.289 ************************************ 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.289 ************************************ 00:18:09.289 START TEST nvmf_host_multipath 00:18:09.289 
************************************ 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:09.289 * Looking for test storage... 00:18:09.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.289 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.549 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.549 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.549 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.550 --rc genhtml_branch_coverage=1 00:18:09.550 --rc genhtml_function_coverage=1 00:18:09.550 --rc genhtml_legend=1 00:18:09.550 --rc geninfo_all_blocks=1 00:18:09.550 --rc geninfo_unexecuted_blocks=1 00:18:09.550 00:18:09.550 ' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.550 --rc genhtml_branch_coverage=1 00:18:09.550 --rc genhtml_function_coverage=1 00:18:09.550 --rc genhtml_legend=1 00:18:09.550 --rc geninfo_all_blocks=1 00:18:09.550 --rc geninfo_unexecuted_blocks=1 00:18:09.550 00:18:09.550 ' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.550 --rc genhtml_branch_coverage=1 00:18:09.550 --rc genhtml_function_coverage=1 00:18:09.550 --rc genhtml_legend=1 00:18:09.550 --rc geninfo_all_blocks=1 00:18:09.550 --rc geninfo_unexecuted_blocks=1 00:18:09.550 00:18:09.550 ' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.550 --rc genhtml_branch_coverage=1 00:18:09.550 --rc genhtml_function_coverage=1 00:18:09.550 --rc genhtml_legend=1 00:18:09.550 --rc geninfo_all_blocks=1 00:18:09.550 --rc geninfo_unexecuted_blocks=1 00:18:09.550 00:18:09.550 ' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.550 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.550 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:09.551 Cannot find device "nvmf_init_br" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:09.551 Cannot find device "nvmf_init_br2" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:09.551 Cannot find device "nvmf_tgt_br" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.551 Cannot find device "nvmf_tgt_br2" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:09.551 Cannot find device "nvmf_init_br" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:09.551 Cannot find device "nvmf_init_br2" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:09.551 Cannot find device "nvmf_tgt_br" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:09.551 Cannot find device "nvmf_tgt_br2" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:09.551 Cannot find device "nvmf_br" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:09.551 Cannot find device "nvmf_init_if" 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:09.551 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:09.551 Cannot find device "nvmf_init_if2" 00:18:09.810 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:09.810 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:09.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.810 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:09.810 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.810 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:09.810 19:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:09.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:09.810 00:18:09.810 --- 10.0.0.3 ping statistics --- 00:18:09.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.810 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:09.810 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:09.810 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:18:09.810 00:18:09.810 --- 10.0.0.4 ping statistics --- 00:18:09.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.810 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:09.810 00:18:09.810 --- 10.0.0.1 ping statistics --- 00:18:09.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.810 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:09.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:09.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:18:09.810 00:18:09.810 --- 10.0.0.2 ping statistics --- 00:18:09.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.810 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80419 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80419 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80419 ']' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.810 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:10.069 [2024-11-26 19:26:08.280848] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
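A condensed sketch (not part of the captured trace) of the network that the nvmf_veth_init commands above build, so the target can run inside its own network namespace while bdevperf connects from the root namespace. Interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.x/24 addresses are copied from the log; the second initiator/target veth pair, the iptables comment tags and all error handling are omitted, and root privileges are assumed:

    ip netns add nvmf_tgt_ns_spdk                                 # target namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up       # bridge joining the two veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.3                                            # reachability check, initiator -> target

With this in place, nvmf_tgt is started under "ip netns exec nvmf_tgt_ns_spdk" (nvmfpid 80419 above) and listens on 10.0.0.3 ports 4420 and 4421, which is where the multipath test later attaches its two bdevperf paths.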
00:18:10.069 [2024-11-26 19:26:08.280956] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.069 [2024-11-26 19:26:08.430840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:10.069 [2024-11-26 19:26:08.485902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.069 [2024-11-26 19:26:08.486004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.069 [2024-11-26 19:26:08.486016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.069 [2024-11-26 19:26:08.486025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.069 [2024-11-26 19:26:08.486032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.069 [2024-11-26 19:26:08.487124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.069 [2024-11-26 19:26:08.487139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.328 [2024-11-26 19:26:08.542746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80419 00:18:10.328 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:10.585 [2024-11-26 19:26:08.958700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.585 19:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:11.153 Malloc0 00:18:11.153 19:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:11.153 19:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.721 19:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:11.721 [2024-11-26 19:26:10.088029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:11.721 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:11.980 [2024-11-26 19:26:10.348153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80466 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80466 /var/tmp/bdevperf.sock 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80466 ']' 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.980 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:12.547 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.547 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:12.547 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:12.806 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:13.065 Nvme0n1 00:18:13.065 19:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:13.324 Nvme0n1 00:18:13.324 19:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:13.324 19:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.261 19:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:14.261 19:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:14.827 19:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:14.827 19:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:14.827 19:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80419 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:14.827 19:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80499 00:18:14.827 19:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.415 Attaching 4 probes... 00:18:21.415 @path[10.0.0.3, 4421]: 18512 00:18:21.415 @path[10.0.0.3, 4421]: 18883 00:18:21.415 @path[10.0.0.3, 4421]: 18950 00:18:21.415 @path[10.0.0.3, 4421]: 18936 00:18:21.415 @path[10.0.0.3, 4421]: 18829 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80499 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:21.415 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:21.983 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:21.983 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80618 00:18:21.983 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.983 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80419 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.545 Attaching 4 probes... 00:18:28.545 @path[10.0.0.3, 4420]: 18217 00:18:28.545 @path[10.0.0.3, 4420]: 18477 00:18:28.545 @path[10.0.0.3, 4420]: 18457 00:18:28.545 @path[10.0.0.3, 4420]: 19340 00:18:28.545 @path[10.0.0.3, 4420]: 18939 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80618 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:28.545 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:28.804 19:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:28.804 19:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80732 00:18:28.804 19:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:28.804 19:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80419 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.426 Attaching 4 probes... 00:18:35.426 @path[10.0.0.3, 4421]: 13496 00:18:35.426 @path[10.0.0.3, 4421]: 18569 00:18:35.426 @path[10.0.0.3, 4421]: 17409 00:18:35.426 @path[10.0.0.3, 4421]: 17276 00:18:35.426 @path[10.0.0.3, 4421]: 17924 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80732 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:35.426 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:35.684 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:35.684 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80849 00:18:35.684 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80419 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.684 19:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:42.244 19:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:42.244 19:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.244 Attaching 4 probes... 
00:18:42.244 00:18:42.244 00:18:42.244 00:18:42.244 00:18:42.244 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80849 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:42.244 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:42.503 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:42.503 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80961 00:18:42.503 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80419 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:42.503 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:49.115 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:49.115 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.115 Attaching 4 probes... 
00:18:49.115 @path[10.0.0.3, 4421]: 17295 00:18:49.115 @path[10.0.0.3, 4421]: 17928 00:18:49.115 @path[10.0.0.3, 4421]: 17908 00:18:49.115 @path[10.0.0.3, 4421]: 18111 00:18:49.115 @path[10.0.0.3, 4421]: 18076 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80961 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:49.115 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:50.050 19:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:50.050 19:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81085 00:18:50.050 19:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80419 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:50.050 19:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:56.610 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:56.611 Attaching 4 probes... 
00:18:56.611 @path[10.0.0.3, 4420]: 16062 00:18:56.611 @path[10.0.0.3, 4420]: 17099 00:18:56.611 @path[10.0.0.3, 4420]: 17440 00:18:56.611 @path[10.0.0.3, 4420]: 17225 00:18:56.611 @path[10.0.0.3, 4420]: 16899 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81085 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:56.611 [2024-11-26 19:26:54.973932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:56.611 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:56.870 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:03.431 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:03.431 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81259 00:19:03.431 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80419 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:03.431 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:10.014 Attaching 4 probes... 
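A condensed sketch (not part of the captured trace) of the confirm_io_on_port check that repeats above and below: after each set_ANA_state call, multipath.sh starts a bpftrace probe that counts I/O per listener, lets bdevperf run for a few seconds, then compares the port the target reports in the requested ANA state against the port that actually carried the I/O. The lines below only rearrange commands already visible in the trace (rpc.py, bpftrace.sh, nvmf_path.bt, the jq/awk/cut/sed filters); how they are composed in the real script may differ:

    # confirm_io_on_port <expected_ana_state> <expected_port> -- illustrative reconstruction only
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    scripts=/home/vagrant/spdk_repo/spdk/scripts
    dtrace_pid=$($scripts/bpftrace.sh "$nvmfapp_pid" $scripts/bpf/nvmf_path.bt)   # probe emits "@path[ip, port]: count"
    sleep 6                                                    # let bdevperf issue I/O on the current ANA layout
    # port the target claims is in the requested ANA state (e.g. optimized -> 4421)
    active_port=$($rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
    # port that actually received I/O according to the probe counters in trace.txt
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ $active_port == "$2" && $port == "$2" ]]                # both must match the expected port
    kill "$dtrace_pid"; rm -f trace.txt                        # stop the probe, drop its output

In the "set_ANA_state inaccessible inaccessible" iteration above, neither listener is usable, so both the reported port and the observed port come back empty and the expected port is the empty string, which is why that block shows no @path counters.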
00:19:10.014 @path[10.0.0.3, 4421]: 17508 00:19:10.014 @path[10.0.0.3, 4421]: 17733 00:19:10.014 @path[10.0.0.3, 4421]: 18234 00:19:10.014 @path[10.0.0.3, 4421]: 17427 00:19:10.014 @path[10.0.0.3, 4421]: 18080 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81259 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80466 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80466 ']' 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80466 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80466 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:10.014 killing process with pid 80466 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80466' 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80466 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80466 00:19:10.014 { 00:19:10.014 "results": [ 00:19:10.014 { 00:19:10.014 "job": "Nvme0n1", 00:19:10.014 "core_mask": "0x4", 00:19:10.014 "workload": "verify", 00:19:10.014 "status": "terminated", 00:19:10.014 "verify_range": { 00:19:10.014 "start": 0, 00:19:10.014 "length": 16384 00:19:10.014 }, 00:19:10.014 "queue_depth": 128, 00:19:10.014 "io_size": 4096, 00:19:10.014 "runtime": 55.887188, 00:19:10.014 "iops": 7709.244558878146, 00:19:10.014 "mibps": 30.114236558117756, 00:19:10.014 "io_failed": 0, 00:19:10.014 "io_timeout": 0, 00:19:10.014 "avg_latency_us": 16570.942382109868, 00:19:10.014 "min_latency_us": 886.2254545454546, 00:19:10.014 "max_latency_us": 7015926.69090909 00:19:10.014 } 00:19:10.014 ], 00:19:10.014 "core_count": 1 00:19:10.014 } 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80466 00:19:10.014 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:10.014 [2024-11-26 19:26:10.416369] Starting SPDK v25.01-pre git sha1 67afc973b / 
DPDK 24.03.0 initialization... 00:19:10.014 [2024-11-26 19:26:10.416469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80466 ] 00:19:10.014 [2024-11-26 19:26:10.568048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.014 [2024-11-26 19:26:10.625964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.014 [2024-11-26 19:26:10.683107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.014 Running I/O for 90 seconds... 00:19:10.014 9278.00 IOPS, 36.24 MiB/s [2024-11-26T19:27:08.454Z] 9472.50 IOPS, 37.00 MiB/s [2024-11-26T19:27:08.454Z] 9469.67 IOPS, 36.99 MiB/s [2024-11-26T19:27:08.454Z] 9460.25 IOPS, 36.95 MiB/s [2024-11-26T19:27:08.454Z] 9467.40 IOPS, 36.98 MiB/s [2024-11-26T19:27:08.454Z] 9468.17 IOPS, 36.99 MiB/s [2024-11-26T19:27:08.454Z] 9460.71 IOPS, 36.96 MiB/s [2024-11-26T19:27:08.454Z] 9461.12 IOPS, 36.96 MiB/s [2024-11-26T19:27:08.454Z] [2024-11-26 19:26:20.098102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.014 [2024-11-26 19:26:20.098162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:10.014 [2024-11-26 19:26:20.098217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.014 [2024-11-26 19:26:20.098238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:10.014 [2024-11-26 19:26:20.098260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.014 [2024-11-26 19:26:20.098276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:10.014 [2024-11-26 19:26:20.098296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.014 [2024-11-26 19:26:20.098311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:10.014 [2024-11-26 19:26:20.098332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.014 [2024-11-26 19:26:20.098347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:10.014 [2024-11-26 19:26:20.098367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.014 [2024-11-26 19:26:20.098381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:10.014 [2024-11-26 19:26:20.098401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.014 [2024-11-26 19:26:20.098416] 
[2024-11-26 19:26:20 - 19:26:26] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ and WRITE commands on sqid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
9408.89 IOPS, 36.75 MiB/s [2024-11-26T19:27:08.458Z] 9396.60 IOPS, 36.71 MiB/s [2024-11-26T19:27:08.458Z] 9380.18 IOPS, 36.64 MiB/s [2024-11-26T19:27:08.458Z] 9375.67 IOPS, 36.62 MiB/s [2024-11-26T19:27:08.458Z] 9390.15 IOPS, 36.68 MiB/s [2024-11-26T19:27:08.458Z] 9401.14 IOPS, 36.72 MiB/s [2024-11-26T19:27:08.458Z]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.708984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.709382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:19:10.020 [2024-11-26 19:26:26.709403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.709974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.709991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.710011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.710034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.710056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.020 [2024-11-26 19:26:26.710071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.710092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.710107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.710128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.710143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.710163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.710179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.710199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.020 [2024-11-26 19:26:26.710214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:10.020 [2024-11-26 19:26:26.710235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.021 [2024-11-26 19:26:26.710250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.710271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.021 [2024-11-26 19:26:26.710286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.710307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.021 [2024-11-26 19:26:26.710322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.021 [2024-11-26 19:26:26.711576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.711622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.711662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.711700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:10.021 [2024-11-26 19:26:26.711757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.711795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.711850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.711887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.711962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.711984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73944 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.712920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.712936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.713368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.713394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.713422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.713439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:19:10.021 [2024-11-26 19:26:26.713461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.021 [2024-11-26 19:26:26.713492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:10.021 [2024-11-26 19:26:26.713528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.021 [2024-11-26 19:26:26.713543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.713964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.713990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:10.022 [2024-11-26 19:26:26.714597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.022 [2024-11-26 19:26:26.714711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.022 [2024-11-26 19:26:26.714904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:10.022 [2024-11-26 19:26:26.714929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.714944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.714965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.714979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.715015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.715050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.715085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.715128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.715164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.715200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.715252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.715274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.715289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:19:10.023 [2024-11-26 19:26:26.727693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.727762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.727815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.727871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.727948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.727983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.728005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.728035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.728057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.728088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.728110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.728141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.728163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.728205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.023 [2024-11-26 19:26:26.728233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.728264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.728287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.728334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.023 [2024-11-26 19:26:26.728358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:10.023 [2024-11-26 19:26:26.728390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.728954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.728976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.024 [2024-11-26 19:26:26.729298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:10.024 [2024-11-26 19:26:26.729359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.024 [2024-11-26 19:26:26.729411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.024 [2024-11-26 19:26:26.729463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.024 [2024-11-26 19:26:26.729515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.024 [2024-11-26 19:26:26.729581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.024 [2024-11-26 19:26:26.729646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.024 [2024-11-26 19:26:26.729709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.729964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.729987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.730017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.730039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.730070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.730091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.730132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.730158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.730188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.024 [2024-11-26 19:26:26.730210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:10.024 [2024-11-26 19:26:26.730240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.730741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.730763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:10.025 
[2024-11-26 19:26:26.733479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.733950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.733981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.734003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.025 [2024-11-26 19:26:26.734057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734548] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:10.025 [2024-11-26 19:26:26.734795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.025 [2024-11-26 19:26:26.734817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.734847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.734869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.734913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.734951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.734982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 
19:26:26.735135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73504 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.735851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.735918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.735951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.735974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.736026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.736078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.736133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.736205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736236] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.736257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.026 [2024-11-26 19:26:26.736310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.736943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:10.026 [2024-11-26 19:26:26.736984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.026 [2024-11-26 19:26:26.737007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 
m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.737470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.737947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.737986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.027 [2024-11-26 19:26:26.738952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.738991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.027 [2024-11-26 19:26:26.739023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:10.027 [2024-11-26 19:26:26.739053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:10.027 [2024-11-26 19:26:26.739075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.028 [2024-11-26 19:26:26.739128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.028 [2024-11-26 19:26:26.739180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.028 [2024-11-26 19:26:26.739232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.028 [2024-11-26 19:26:26.739284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.028 [2024-11-26 19:26:26.739346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.028 [2024-11-26 19:26:26.739409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.739969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.739992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.740321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.740337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:19:10.028 [2024-11-26 19:26:26.742839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.742969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.742986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.743008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.743025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.743047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.743075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.743099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.028 [2024-11-26 19:26:26.743115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:10.028 [2024-11-26 19:26:26.743137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.743153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.743191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.743230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.743268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.743943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.743965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.743981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:10.029 [2024-11-26 19:26:26.744057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.029 [2024-11-26 19:26:26.744607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.744645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.744683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.029 [2024-11-26 19:26:26.744721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:10.029 [2024-11-26 19:26:26.744742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.744758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.744780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.744811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.744832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.744847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.744868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.744898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.744951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.744967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:19:10.030 [2024-11-26 19:26:26.745319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.745797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.745835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.745872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.745911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.745959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.745978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.746032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.746070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.746108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.030 [2024-11-26 19:26:26.746146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.746192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.746230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.746268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.746306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.746344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.030 [2024-11-26 19:26:26.746396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:10.030 [2024-11-26 19:26:26.746417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:10.031 [2024-11-26 19:26:26.746543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.746853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.746889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.746925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.746972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.746994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.747010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.747046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.747081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.747124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.031 [2024-11-26 19:26:26.747160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.031 
[2024-11-26 19:26:26.747739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.747854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.747870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:10.031 [2024-11-26 19:26:26.748300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.031 [2024-11-26 19:26:26.748326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:10.031 
9346.13 IOPS, 36.51 MiB/s [2024-11-26T19:27:08.471Z] 
8778.44 IOPS, 34.29 MiB/s [2024-11-26T19:27:08.471Z] 
8806.53 IOPS, 34.40 MiB/s [2024-11-26T19:27:08.471Z] 
8832.83 IOPS, 34.50 MiB/s [2024-11-26T19:27:08.471Z] 
8815.11 IOPS, 34.43 MiB/s [2024-11-26T19:27:08.471Z] 
8814.35 IOPS, 34.43 MiB/s [2024-11-26T19:27:08.471Z] 
8822.81 IOPS, 34.46 MiB/s [2024-11-26T19:27:08.471Z] 
8824.68 IOPS, 34.47 MiB/s [2024-11-26T19:27:08.472Z] 
[2024-11-26 19:26:33.889230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.889644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.889698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.889736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.889774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.889812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.889849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:10.032 [2024-11-26 19:26:33.889886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.889936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.889962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.889979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.032 [2024-11-26 19:26:33.890306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:10.032 [2024-11-26 19:26:33.890681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.032 [2024-11-26 19:26:33.890697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.890719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.890735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.890757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.890773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.890795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.890811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.890833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.890849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.890871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.890887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.890922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.890939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.890965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.890982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 
m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.891628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.891666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.891704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.891742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.891780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.891818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 
19:26:33.891855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.033 [2024-11-26 19:26:33.891906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.891980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.891998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.892020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.892036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.892058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.892074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.892096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.892112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.892134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.892150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.892172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.892189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.892211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.892226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:10.033 [2024-11-26 19:26:33.892252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119736 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.033 [2024-11-26 19:26:33.892270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.892848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.892973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.892990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:19:10.034 [2024-11-26 19:26:33.893050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.034 [2024-11-26 19:26:33.893499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.893537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.893574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.893613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.893650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.893688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.893726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.893748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 19:26:33.893764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.034 [2024-11-26 19:26:33.894467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.034 [2024-11-26 
19:26:33.894495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.894970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.894999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120000 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.895016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.895045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.895061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.895089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.895106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.895135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.895151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.895180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.895196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.895233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.895251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:33.895280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:33.895296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:10.035 8482.74 IOPS, 33.14 MiB/s [2024-11-26T19:27:08.475Z] 8129.29 IOPS, 31.76 MiB/s [2024-11-26T19:27:08.475Z] 7804.12 IOPS, 30.48 MiB/s [2024-11-26T19:27:08.475Z] 7503.96 IOPS, 29.31 MiB/s [2024-11-26T19:27:08.475Z] 7226.04 IOPS, 28.23 MiB/s [2024-11-26T19:27:08.475Z] 6967.96 IOPS, 27.22 MiB/s [2024-11-26T19:27:08.475Z] 6727.69 IOPS, 26.28 MiB/s [2024-11-26T19:27:08.475Z] 6758.47 IOPS, 26.40 MiB/s [2024-11-26T19:27:08.475Z] 6824.32 IOPS, 26.66 MiB/s [2024-11-26T19:27:08.475Z] 6894.06 IOPS, 26.93 MiB/s [2024-11-26T19:27:08.475Z] 6956.67 IOPS, 27.17 MiB/s [2024-11-26T19:27:08.475Z] 7017.00 IOPS, 27.41 MiB/s [2024-11-26T19:27:08.475Z] 7075.03 IOPS, 27.64 MiB/s [2024-11-26T19:27:08.475Z] [2024-11-26 19:26:47.308514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:53272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.308946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.308962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.309019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.309053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.309086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.309118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.309151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.309184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.035 [2024-11-26 19:26:47.309217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.035 [2024-11-26 19:26:47.309251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.035 [2024-11-26 19:26:47.309288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.035 [2024-11-26 19:26:47.309321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.035 [2024-11-26 19:26:47.309370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.035 [2024-11-26 19:26:47.309390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:19:10.036 [2024-11-26 19:26:47.309424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.309816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.309886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.309957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.309976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.036 [2024-11-26 19:26:47.310421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 
19:26:47.310541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.036 [2024-11-26 19:26:47.310655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.036 [2024-11-26 19:26:47.310669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.310965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.310978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53048 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 [2024-11-26 19:26:47.311798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.037 
[2024-11-26 19:26:47.311858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.311945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.037 [2024-11-26 19:26:47.311987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.037 [2024-11-26 19:26:47.312008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.038 [2024-11-26 19:26:47.312269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.038 [2024-11-26 19:26:47.312724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd5310 is same with the state(6) to be set 00:19:10.038 [2024-11-26 19:26:47.312771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.312781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.312809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53256 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.312823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.312849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 
[2024-11-26 19:26:47.312866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53776 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.312882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.312906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.312917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53784 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.312931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.312960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.312985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.313004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53792 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.313020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.313034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.313044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.313071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53800 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.313084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.313114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.313124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.313136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53808 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.313157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.313172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.313182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.313193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53816 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.313208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.313222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.313232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.313243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53824 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.313257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.313271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.038 [2024-11-26 19:26:47.313282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.038 [2024-11-26 19:26:47.313293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53832 len:8 PRP1 0x0 PRP2 0x0 00:19:10.038 [2024-11-26 19:26:47.313307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.038 [2024-11-26 19:26:47.314639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:10.039 [2024-11-26 19:26:47.314734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.039 [2024-11-26 19:26:47.314758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.039 [2024-11-26 19:26:47.314790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc461e0 (9): Bad file descriptor 00:19:10.039 [2024-11-26 19:26:47.315318] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.039 [2024-11-26 19:26:47.315352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc461e0 with addr=10.0.0.3, port=4421 00:19:10.039 [2024-11-26 19:26:47.315370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc461e0 is same with the state(6) to be set 00:19:10.039 [2024-11-26 19:26:47.315453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc461e0 (9): Bad file descriptor 00:19:10.039 [2024-11-26 19:26:47.315488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:10.039 [2024-11-26 19:26:47.315534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:10.039 [2024-11-26 19:26:47.315550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:10.039 [2024-11-26 19:26:47.315564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
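The trace above captures one failed path switch: the host disconnects the controller, tries to reconnect to 10.0.0.3 port 4421, gets connection refused (errno 111), marks the controller as failed, and schedules another reset; roughly ten seconds later (19:26:57 below) the reset succeeds and throughput recovers. While no path is usable, queued reads and writes are completed manually with ABORTED - SQ DELETION. In the SPDK multipath tests this kind of flap is normally provoked from the target side by dropping and restoring a listener; a minimal sketch of that pattern, using the subsystem and port values visible in this log (the exact sequence the test script uses is an assumption, it is not shown in this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop the second path; in-flight I/O on it is aborted (SQ DELETION) and the
# host falls back to periodic reset/reconnect attempts until a path returns.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 10
# Restore the path; the next scheduled reconnect succeeds, as seen below.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421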
00:19:10.039 [2024-11-26 19:26:47.315579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:10.039 7118.75 IOPS, 27.81 MiB/s [2024-11-26T19:27:08.479Z] 7156.62 IOPS, 27.96 MiB/s [2024-11-26T19:27:08.479Z] 7180.71 IOPS, 28.05 MiB/s [2024-11-26T19:27:08.479Z] 7215.67 IOPS, 28.19 MiB/s [2024-11-26T19:27:08.479Z] 7252.88 IOPS, 28.33 MiB/s [2024-11-26T19:27:08.479Z] 7286.71 IOPS, 28.46 MiB/s [2024-11-26T19:27:08.479Z] 7314.36 IOPS, 28.57 MiB/s [2024-11-26T19:27:08.479Z] 7346.30 IOPS, 28.70 MiB/s [2024-11-26T19:27:08.479Z] 7382.07 IOPS, 28.84 MiB/s [2024-11-26T19:27:08.479Z] 7416.96 IOPS, 28.97 MiB/s [2024-11-26T19:27:08.479Z] [2024-11-26 19:26:57.379236] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:10.039 7450.09 IOPS, 29.10 MiB/s [2024-11-26T19:27:08.479Z] 7482.04 IOPS, 29.23 MiB/s [2024-11-26T19:27:08.479Z] 7511.08 IOPS, 29.34 MiB/s [2024-11-26T19:27:08.479Z] 7539.76 IOPS, 29.45 MiB/s [2024-11-26T19:27:08.479Z] 7571.20 IOPS, 29.57 MiB/s [2024-11-26T19:27:08.479Z] 7596.31 IOPS, 29.67 MiB/s [2024-11-26T19:27:08.479Z] 7619.69 IOPS, 29.76 MiB/s [2024-11-26T19:27:08.479Z] 7649.36 IOPS, 29.88 MiB/s [2024-11-26T19:27:08.479Z] 7668.30 IOPS, 29.95 MiB/s [2024-11-26T19:27:08.479Z] 7693.75 IOPS, 30.05 MiB/s [2024-11-26T19:27:08.479Z] Received shutdown signal, test time was about 55.887988 seconds 00:19:10.039 00:19:10.039 Latency(us) 00:19:10.039 [2024-11-26T19:27:08.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.039 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:10.039 Verification LBA range: start 0x0 length 0x4000 00:19:10.039 Nvme0n1 : 55.89 7709.24 30.11 0.00 0.00 16570.94 886.23 7015926.69 00:19:10.039 [2024-11-26T19:27:08.479Z] =================================================================================================================== 00:19:10.039 [2024-11-26T19:27:08.479Z] Total : 7709.24 30.11 0.00 0.00 16570.94 886.23 7015926.69 00:19:10.039 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:10.039 rmmod nvme_tcp 00:19:10.039 rmmod nvme_fabrics 00:19:10.039 rmmod nvme_keyring 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
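After the run summary is printed, the script deletes the subsystem and runs nvmftestfini; the xtrace below shows killprocess and the virtual-network cleanup in full. Condensed, the teardown order is (commands and pid 80419 taken from the trace; a readable sketch, not the literal script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the target subsystem first
sync                                                    # settle outstanding I/O
modprobe -v -r nvme-tcp                                 # unloads nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 80419 && wait 80419                                # stop the nvmf_tgt reactor process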
00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80419 ']' 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80419 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80419 ']' 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80419 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80419 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.039 killing process with pid 80419 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80419' 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80419 00:19:10.039 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80419 00:19:10.298 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:10.298 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:10.298 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:10.298 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:10.298 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:10.298 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:10.299 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:10.557 00:19:10.557 real 1m1.182s 00:19:10.557 user 2m49.294s 00:19:10.557 sys 0m18.702s 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:10.557 ************************************ 00:19:10.557 END TEST nvmf_host_multipath 00:19:10.557 ************************************ 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.557 19:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.558 19:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.558 ************************************ 00:19:10.558 START TEST nvmf_timeout 00:19:10.558 ************************************ 00:19:10.558 19:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:10.558 * Looking for test storage... 
00:19:10.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:10.558 19:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.558 19:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.558 19:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.817 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.818 --rc genhtml_branch_coverage=1 00:19:10.818 --rc genhtml_function_coverage=1 00:19:10.818 --rc genhtml_legend=1 00:19:10.818 --rc geninfo_all_blocks=1 00:19:10.818 --rc geninfo_unexecuted_blocks=1 00:19:10.818 00:19:10.818 ' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.818 --rc genhtml_branch_coverage=1 00:19:10.818 --rc genhtml_function_coverage=1 00:19:10.818 --rc genhtml_legend=1 00:19:10.818 --rc geninfo_all_blocks=1 00:19:10.818 --rc geninfo_unexecuted_blocks=1 00:19:10.818 00:19:10.818 ' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.818 --rc genhtml_branch_coverage=1 00:19:10.818 --rc genhtml_function_coverage=1 00:19:10.818 --rc genhtml_legend=1 00:19:10.818 --rc geninfo_all_blocks=1 00:19:10.818 --rc geninfo_unexecuted_blocks=1 00:19:10.818 00:19:10.818 ' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.818 --rc genhtml_branch_coverage=1 00:19:10.818 --rc genhtml_function_coverage=1 00:19:10.818 --rc genhtml_legend=1 00:19:10.818 --rc geninfo_all_blocks=1 00:19:10.818 --rc geninfo_unexecuted_blocks=1 00:19:10.818 00:19:10.818 ' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.818 
19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.818 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:10.818 19:27:09 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:10.818 Cannot find device "nvmf_init_br" 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:10.818 Cannot find device "nvmf_init_br2" 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:10.818 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:10.819 Cannot find device "nvmf_tgt_br" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.819 Cannot find device "nvmf_tgt_br2" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:10.819 Cannot find device "nvmf_init_br" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:10.819 Cannot find device "nvmf_init_br2" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:10.819 Cannot find device "nvmf_tgt_br" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:10.819 Cannot find device "nvmf_tgt_br2" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:10.819 Cannot find device "nvmf_br" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:10.819 Cannot find device "nvmf_init_if" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:10.819 Cannot find device "nvmf_init_if2" 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.819 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:11.077 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
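The block of ip and iptables commands traced above is nvmf_veth_init building the test network: two initiator-facing veth interfaces stay in the root namespace (10.0.0.1 and 10.0.0.2), their target-facing counterparts are moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), and all four peer ends are enslaved to the nvmf_br bridge with iptables rules admitting port 4420; the ping checks that follow confirm each address answers. A condensed sketch of the same topology, with names and addresses as in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator side, root namespace
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target side, moved into the namespace
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br                          # bridge all four peer ends together
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT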
00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:11.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:11.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:19:11.078 00:19:11.078 --- 10.0.0.3 ping statistics --- 00:19:11.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.078 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:11.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:11.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:19:11.078 00:19:11.078 --- 10.0.0.4 ping statistics --- 00:19:11.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.078 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:11.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:11.078 00:19:11.078 --- 10.0.0.1 ping statistics --- 00:19:11.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.078 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:11.078 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:11.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:19:11.078 00:19:11.078 --- 10.0.0.2 ping statistics --- 00:19:11.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.078 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81621 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81621 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:11.336 19:27:09 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81621 ']' 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.336 19:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:11.336 [2024-11-26 19:27:09.609450] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:19:11.336 [2024-11-26 19:27:09.610121] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.336 [2024-11-26 19:27:09.767888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:11.595 [2024-11-26 19:27:09.836446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.595 [2024-11-26 19:27:09.836534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.595 [2024-11-26 19:27:09.836550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.595 [2024-11-26 19:27:09.836563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.595 [2024-11-26 19:27:09.836574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:11.595 [2024-11-26 19:27:09.841938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.595 [2024-11-26 19:27:09.841983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.595 [2024-11-26 19:27:09.918911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:12.532 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.532 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:12.532 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:12.532 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.532 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:12.532 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.532 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.533 19:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:12.791 [2024-11-26 19:27:11.001704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.791 19:27:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:13.050 Malloc0 00:19:13.050 19:27:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.309 19:27:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.567 19:27:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:13.825 [2024-11-26 19:27:12.197732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:13.825 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81676 00:19:13.825 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:13.826 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81676 /var/tmp/bdevperf.sock 00:19:13.826 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81676 ']' 00:19:13.826 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.826 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.826 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:13.826 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.826 19:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.084 [2024-11-26 19:27:12.279121] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:19:14.084 [2024-11-26 19:27:12.279222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81676 ] 00:19:14.084 [2024-11-26 19:27:12.436565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.340 [2024-11-26 19:27:12.524654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.340 [2024-11-26 19:27:12.587353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:14.906 19:27:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.906 19:27:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:14.906 19:27:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:15.474 19:27:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:15.732 NVMe0n1 00:19:15.732 19:27:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81700 00:19:15.732 19:27:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.732 19:27:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:15.732 Running I/O for 10 seconds... 
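Stripped of the xtrace noise, the fixture set up above boils down to a handful of RPCs: five against the target's default RPC socket to provision a Malloc-backed subsystem with a TCP listener, then two against the bdevperf instance on /var/tmp/bdevperf.sock to attach that subsystem with deliberately short recovery timeouts. All values are exactly as captured in this run; the rpc.py invocations are shortened here to scripts/rpc.py (the log uses the absolute repo path):

  # target side (nvmf_tgt inside nvmf_tgt_ns_spdk, default socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # initiator side (bdevperf started as build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The retry option on bdev_nvme_set_options and the 5-second controller-loss timeout with a 2-second reconnect delay on the attach are the knobs this timeout test exercises: once the listener disappears, the controller is expected to retry for a bounded time and then fail rather than hang.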
00:19:16.667 19:27:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:16.930 8039.00 IOPS, 31.40 MiB/s [2024-11-26T19:27:15.370Z] [2024-11-26 19:27:15.222524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.930 [2024-11-26 19:27:15.222749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.930 [2024-11-26 19:27:15.222759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.222767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.222786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.222804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.222823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.222858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.222879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.222899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.222944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.222964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.222986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.222997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.931 [2024-11-26 19:27:15.223389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.223408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.223426] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.931 [2024-11-26 19:27:15.223436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.931 [2024-11-26 19:27:15.223445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 
[2024-11-26 19:27:15.223868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.223896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.223979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.223988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.224000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.224009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.224019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.224028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.224039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.224048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.224059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.932 [2024-11-26 19:27:15.224068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.224079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.224087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.224098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.932 [2024-11-26 19:27:15.224106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.932 [2024-11-26 19:27:15.224117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.933 [2024-11-26 19:27:15.224478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75384 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 
[2024-11-26 19:27:15.224690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.933 [2024-11-26 19:27:15.224718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.933 [2024-11-26 19:27:15.224727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.224745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.224764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.224787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.224985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.224997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.225005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.225025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.225044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.225063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.225083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.934 [2024-11-26 19:27:15.225101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.225130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.225154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.225174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.225194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.225213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.225233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.934 [2024-11-26 19:27:15.225260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.934 [2024-11-26 19:27:15.225324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.934 [2024-11-26 19:27:15.225332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75568 len:8 PRP1 0x0 PRP2 0x0 00:19:16.934 [2024-11-26 19:27:15.225342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.934 [2024-11-26 19:27:15.225631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:16.934 [2024-11-26 19:27:15.225711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e9e50 (9): Bad file descriptor 00:19:16.934 [2024-11-26 19:27:15.225820] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.934 [2024-11-26 19:27:15.225845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e9e50 with addr=10.0.0.3, port=4420 00:19:16.934 [2024-11-26 19:27:15.225857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9e50 is same with the state(6) to be set 00:19:16.934 [2024-11-26 19:27:15.225880] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e9e50 (9): Bad file descriptor 00:19:16.934 [2024-11-26 19:27:15.225925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:16.934 [2024-11-26 19:27:15.225938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:16.934 [2024-11-26 19:27:15.225949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:16.934 [2024-11-26 19:27:15.225964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:16.934 [2024-11-26 19:27:15.225975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:16.934 19:27:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:18.844 4696.50 IOPS, 18.35 MiB/s [2024-11-26T19:27:17.284Z] 3131.00 IOPS, 12.23 MiB/s [2024-11-26T19:27:17.284Z] [2024-11-26 19:27:17.226188] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.844 [2024-11-26 19:27:17.226260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e9e50 with addr=10.0.0.3, port=4420 00:19:18.844 [2024-11-26 19:27:17.226277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9e50 is same with the state(6) to be set 00:19:18.844 [2024-11-26 19:27:17.226304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e9e50 (9): Bad file descriptor 00:19:18.844 [2024-11-26 19:27:17.226326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:18.844 [2024-11-26 19:27:17.226336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:18.844 [2024-11-26 19:27:17.226347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:18.844 [2024-11-26 19:27:17.226359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
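Each failed reconnect in this stretch reports uring connect() errno = 111 because the test removed the 10.0.0.3:4420 listener at the start of the measurement window (nvmf_subsystem_remove_listener above), so every reconnect attempt is actively refused rather than timing out. Errno 111 is ECONNREFUSED, which can be confirmed from the kernel headers on a typical Linux install (header location may vary):

  grep -w 111 /usr/include/asm-generic/errno.h

which prints a line like "#define ECONNREFUSED 111 /* Connection refused */".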
00:19:18.844 [2024-11-26 19:27:17.226370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:18.844 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:18.844 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:18.844 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:19.410 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:19.410 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:19.410 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:19.410 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:19.410 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:19.410 19:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:21.042 2348.25 IOPS, 9.17 MiB/s [2024-11-26T19:27:19.482Z] 1878.60 IOPS, 7.34 MiB/s [2024-11-26T19:27:19.482Z] [2024-11-26 19:27:19.226673] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.042 [2024-11-26 19:27:19.226762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e9e50 with addr=10.0.0.3, port=4420 00:19:21.042 [2024-11-26 19:27:19.226778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9e50 is same with the state(6) to be set 00:19:21.042 [2024-11-26 19:27:19.226803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e9e50 (9): Bad file descriptor 00:19:21.042 [2024-11-26 19:27:19.226822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:21.042 [2024-11-26 19:27:19.226832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:21.042 [2024-11-26 19:27:19.226842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:21.042 [2024-11-26 19:27:19.226852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:21.042 [2024-11-26 19:27:19.226863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:22.913 1565.50 IOPS, 6.12 MiB/s [2024-11-26T19:27:21.353Z] 1341.86 IOPS, 5.24 MiB/s [2024-11-26T19:27:21.353Z] [2024-11-26 19:27:21.227049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:22.913 [2024-11-26 19:27:21.227120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:22.913 [2024-11-26 19:27:21.227133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:22.913 [2024-11-26 19:27:21.227144] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:22.913 [2024-11-26 19:27:21.227158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
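The cadence of these failures lines up with the attach options used earlier: the first reconnect attempt fails at about 19:27:15.22, and with --reconnect-delay-sec 2 the follow-up attempts land roughly two seconds apart, at 19:27:17.22 and 19:27:19.22. With --ctrlr-loss-timeout-sec 5 the give-up point is about 19:27:15.22 + 5 s = 19:27:20.22, so by the 19:27:21.22 cycle the controller is already marked failed ("already in failed state") instead of issuing another connect. The perform_tests job then winds down with 128 failed I/Os, which is what the summary below reports.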
00:19:23.846 1174.12 IOPS, 4.59 MiB/s 00:19:23.846 Latency(us) 00:19:23.846 [2024-11-26T19:27:22.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.846 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:23.846 Verification LBA range: start 0x0 length 0x4000 00:19:23.846 NVMe0n1 : 8.14 1153.43 4.51 15.72 0.00 109324.13 3544.90 7015926.69 00:19:23.846 [2024-11-26T19:27:22.286Z] =================================================================================================================== 00:19:23.846 [2024-11-26T19:27:22.286Z] Total : 1153.43 4.51 15.72 0.00 109324.13 3544.90 7015926.69 00:19:23.846 { 00:19:23.846 "results": [ 00:19:23.846 { 00:19:23.846 "job": "NVMe0n1", 00:19:23.846 "core_mask": "0x4", 00:19:23.846 "workload": "verify", 00:19:23.846 "status": "finished", 00:19:23.846 "verify_range": { 00:19:23.846 "start": 0, 00:19:23.846 "length": 16384 00:19:23.846 }, 00:19:23.846 "queue_depth": 128, 00:19:23.846 "io_size": 4096, 00:19:23.846 "runtime": 8.143529, 00:19:23.846 "iops": 1153.4311476019795, 00:19:23.846 "mibps": 4.505590420320233, 00:19:23.846 "io_failed": 128, 00:19:23.846 "io_timeout": 0, 00:19:23.846 "avg_latency_us": 109324.13017272824, 00:19:23.846 "min_latency_us": 3544.9018181818183, 00:19:23.846 "max_latency_us": 7015926.69090909 00:19:23.846 } 00:19:23.846 ], 00:19:23.846 "core_count": 1 00:19:23.846 } 00:19:24.413 19:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:24.413 19:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:24.413 19:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:24.671 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:24.671 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:24.671 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:24.671 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81700 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81676 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81676 ']' 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81676 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81676 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.238 killing process with pid 81676 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81676' 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- 
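By the time the final result object above is printed, the controller-loss timeout has run out: the controller and the NVMe0n1 bdev have been deleted, which is why the same @62/@63 checks now compare empty strings, and the first bdevperf instance is torn down. The human-readable table is derived from the JSON fields shown above; a small sketch of pulling the headline numbers back out of a saved copy (result.json is a hypothetical file name):

  # Extract per-job IOPS, failed I/O count and average latency from a bdevperf result object.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os, \(.avg_latency_us) us avg latency"' result.json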
common/autotest_common.sh@973 -- # kill 81676 00:19:25.238 Received shutdown signal, test time was about 9.368004 seconds 00:19:25.238 00:19:25.238 Latency(us) 00:19:25.238 [2024-11-26T19:27:23.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.238 [2024-11-26T19:27:23.678Z] =================================================================================================================== 00:19:25.238 [2024-11-26T19:27:23.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81676 00:19:25.238 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:25.497 [2024-11-26 19:27:23.872750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81822 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81822 /var/tmp/bdevperf.sock 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81822 ']' 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.497 19:27:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 [2024-11-26 19:27:23.950754] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:19:25.756 [2024-11-26 19:27:23.950874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81822 ] 00:19:25.756 [2024-11-26 19:27:24.093280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.756 [2024-11-26 19:27:24.145831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.014 [2024-11-26 19:27:24.199094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:26.014 19:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.014 19:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:26.014 19:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:26.274 19:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:26.533 NVMe0n1 00:19:26.533 19:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81834 00:19:26.533 19:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:26.533 19:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:26.792 Running I/O for 10 seconds... 
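A second bdevperf instance is then wired to the target for the next phase: an unlimited retry count (-r -1) plus an attach whose reconnect knobs are deliberately short, so that the upcoming listener removal exercises the timeout path within a few seconds. The traced calls boil down to roughly the following sketch (same paths, address and NQN as the trace; with these values the bdev layer should retry the connection every 1 s, start failing I/O back after 2 s, and drop the controller, and with it NVMe0n1, once it has been unreachable for 5 s):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # -r -1: retry count of -1, i.e. unlimited, as in the traced call.
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Kick off the workload in the background and remember its pid, as the script does at @83/@84.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
  rpc_pid=$!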
00:19:27.729 19:27:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:27.990 6804.00 IOPS, 26.58 MiB/s [2024-11-26T19:27:26.430Z] [2024-11-26 19:27:26.184534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.990 [2024-11-26 19:27:26.184783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 
*ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.992 [2024-11-26 19:27:26.185549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.992 [2024-11-26 19:27:26.185557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14680a0 is same with the state(6) to be set 00:19:27.992 [2024-11-26 19:27:26.185625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.185977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.185986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 
19:27:26.186070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.992 [2024-11-26 19:27:26.186403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.992 [2024-11-26 19:27:26.186414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:27.993 [2024-11-26 19:27:26.186887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.186979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.186990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.187000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.187011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.187020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.187032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.187041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.187052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.187061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.187072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.187081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.993 [2024-11-26 19:27:26.187092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.993 [2024-11-26 19:27:26.187101] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:27.993 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: each queued READ on sqid:1 (lba:64336 through lba:64656, len:8, SGL TRANSPORT DATA BLOCK) and each queued WRITE on sqid:1 (lba:64680 through lba:64744, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) is completed with ABORTED - SQ DELETION (00/08), timestamps 2024-11-26 19:27:26.187113 through 19:27:26.188173 ...]
00:19:27.995 [2024-11-26 19:27:26.188183]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.995 [2024-11-26 19:27:26.188192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.995 [2024-11-26 19:27:26.188212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.995 [2024-11-26 19:27:26.188231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.995 [2024-11-26 19:27:26.188251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.995 [2024-11-26 19:27:26.188271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.995 [2024-11-26 19:27:26.188291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.995 [2024-11-26 19:27:26.188311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ea50 is same with the state(6) to be set 00:19:27.995 [2024-11-26 19:27:26.188333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.995 [2024-11-26 19:27:26.188341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.995 [2024-11-26 19:27:26.188353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:19:27.995 [2024-11-26 19:27:26.188362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.995 [2024-11-26 19:27:26.188660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:27.995 [2024-11-26 19:27:26.188746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:27.995 [2024-11-26 19:27:26.188854] uring.c: 664:uring_sock_create: *ERROR*: 
connect() failed, errno = 111 00:19:27.995 [2024-11-26 19:27:26.188874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cee50 with addr=10.0.0.3, port=4420 00:19:27.995 [2024-11-26 19:27:26.188885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cee50 is same with the state(6) to be set 00:19:27.995 [2024-11-26 19:27:26.188917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:27.995 [2024-11-26 19:27:26.188935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:27.995 [2024-11-26 19:27:26.188944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:27.995 [2024-11-26 19:27:26.188955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:27.995 [2024-11-26 19:27:26.188966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:27.995 [2024-11-26 19:27:26.188976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:27.995 19:27:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:28.930 3986.00 IOPS, 15.57 MiB/s [2024-11-26T19:27:27.370Z] [2024-11-26 19:27:27.189138] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.930 [2024-11-26 19:27:27.189246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cee50 with addr=10.0.0.3, port=4420 00:19:28.930 [2024-11-26 19:27:27.189262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cee50 is same with the state(6) to be set 00:19:28.930 [2024-11-26 19:27:27.189288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:28.930 [2024-11-26 19:27:27.189307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:28.930 [2024-11-26 19:27:27.189318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:28.930 [2024-11-26 19:27:27.189329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:28.930 [2024-11-26 19:27:27.189340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:28.930 [2024-11-26 19:27:27.189351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:28.930 19:27:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:29.189 [2024-11-26 19:27:27.463359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:29.189 19:27:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81834 00:19:30.014 2657.33 IOPS, 10.38 MiB/s [2024-11-26T19:27:28.454Z] [2024-11-26 19:27:28.205323] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
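The nvmf_subsystem_add_listener call above restores the TCP listener that host/timeout.sh removed earlier, which is why the host's next controller reset finally succeeds. A minimal sketch, assuming the same repo path, NQN, and address seen in this log (the outage duration is illustrative), of driving that listener flap from Python by shelling out to the same rpc.py script:

import subprocess
import time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path as used in this job
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

def flap_listener(outage_s=2.0):
    # Drop the listener: host connect() attempts then fail with errno 111 (ECONNREFUSED),
    # matching the uring_sock_create errors above.
    subprocess.run([RPC, "nvmf_subsystem_remove_listener", NQN, *LISTENER], check=True)
    time.sleep(outage_s)
    # Re-add it: the host's pending reset should then complete
    # ("Resetting controller successful" in the log).
    subprocess.run([RPC, "nvmf_subsystem_add_listener", NQN, *LISTENER], check=True)

if __name__ == "__main__":
    flap_listener()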
00:19:31.956 1993.00 IOPS, 7.79 MiB/s [2024-11-26T19:27:31.331Z] 3067.60 IOPS, 11.98 MiB/s [2024-11-26T19:27:32.268Z] 4079.00 IOPS, 15.93 MiB/s [2024-11-26T19:27:33.202Z] 4812.86 IOPS, 18.80 MiB/s [2024-11-26T19:27:34.137Z] 5356.25 IOPS, 20.92 MiB/s [2024-11-26T19:27:35.074Z] 5787.78 IOPS, 22.61 MiB/s [2024-11-26T19:27:35.074Z] 6139.40 IOPS, 23.98 MiB/s
00:19:36.634 Latency(us)
00:19:36.634 [2024-11-26T19:27:35.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:36.634 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:36.634 Verification LBA range: start 0x0 length 0x4000
00:19:36.634 NVMe0n1 : 10.01 6144.85 24.00 0.00 0.00 20798.80 1489.45 3035150.89
00:19:36.634 [2024-11-26T19:27:35.074Z] ===================================================================================================================
00:19:36.634 [2024-11-26T19:27:35.074Z] Total : 6144.85 24.00 0.00 0.00 20798.80 1489.45 3035150.89
00:19:36.634 {
00:19:36.634   "results": [
00:19:36.634     {
00:19:36.634       "job": "NVMe0n1",
00:19:36.634       "core_mask": "0x4",
00:19:36.634       "workload": "verify",
00:19:36.634       "status": "finished",
00:19:36.634       "verify_range": {
00:19:36.634         "start": 0,
00:19:36.634         "length": 16384
00:19:36.634       },
00:19:36.634       "queue_depth": 128,
00:19:36.634       "io_size": 4096,
00:19:36.634       "runtime": 10.009352,
00:19:36.634       "iops": 6144.853333162826,
00:19:36.634       "mibps": 24.00333333266729,
00:19:36.634       "io_failed": 0,
00:19:36.634       "io_timeout": 0,
00:19:36.634       "avg_latency_us": 20798.804927708454,
00:19:36.634       "min_latency_us": 1489.4545454545455,
00:19:36.634       "max_latency_us": 3035150.8945454545
00:19:36.634     }
00:19:36.634   ],
00:19:36.634   "core_count": 1
00:19:36.634 }
00:19:36.634 19:27:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81944
00:19:36.634 19:27:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:36.634 19:27:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:36.634 Running I/O for 10 seconds...
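The JSON object above is the machine-readable form of the bdevperf summary table that precedes it. A minimal sketch, assuming the JSON block has been captured to a file (the filename is illustrative), of pulling the same headline numbers back out:

import json

# Hypothetical file holding the "results" JSON printed by bdevperf above.
with open("bdevperf_results.json") as f:
    data = json.load(f)

for job in data["results"]:
    # These keys mirror the table columns: IOPS, MiB/s, and latency in microseconds.
    print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {job["mibps"]:.2f} MiB/s, '
          f'avg {job["avg_latency_us"]:.2f} us '
          f'(min {job["min_latency_us"]:.2f}, max {job["max_latency_us"]:.2f}), '
          f'io_failed={job["io_failed"]}')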
00:19:37.829 19:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:38.091 6932.00 IOPS, 27.08 MiB/s [2024-11-26T19:27:36.531Z] [2024-11-26 19:27:36.309642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469230 is same with the state(6) to be set
00:19:38.091 [... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x1469230 repeats with successive timestamps from 2024-11-26 19:27:36.309706 through 19:27:36.310622; omitted here ...]
00:19:38.092 [2024-11-26 19:27:36.310631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1469230 is same with the state(6) to be set
00:19:38.092 [... two more identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* lines for tqpair=0x1469230 at 2024-11-26 19:27:36.310639 and 19:27:36.310648; omitted ...]
00:19:38.092 [2024-11-26 19:27:36.310709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.092 [2024-11-26 19:27:36.310738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.092 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: each queued READ on sqid:1 (lba:62760 through lba:63376, len:8, SGL TRANSPORT DATA BLOCK) is completed with ABORTED - SQ DELETION (00/08), timestamps 2024-11-26 19:27:36.310766 through 19:27:36.312436 ...]
00:19:38.094 [2024-11-26 19:27:36.312447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.094 [2024-11-26 19:27:36.312461] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.094 [2024-11-26 19:27:36.312657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.094 [2024-11-26 19:27:36.312666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.312982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.312993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.313002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.313022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.313042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.313062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.313083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.095 [2024-11-26 19:27:36.313094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.095 [2024-11-26 19:27:36.313103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313298] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.095 [2024-11-26 19:27:36.313338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.095 [2024-11-26 19:27:36.313347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.096 [2024-11-26 19:27:36.313358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.096 [2024-11-26 19:27:36.313368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.096 [2024-11-26 19:27:36.313379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.096 [2024-11-26 19:27:36.313388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.096 [2024-11-26 19:27:36.313399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.096 [2024-11-26 19:27:36.313409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.096 [2024-11-26 19:27:36.313420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.096 [2024-11-26 19:27:36.313428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.096 [2024-11-26 19:27:36.313439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152c710 is same with the state(6) to be set 00:19:38.096 [2024-11-26 19:27:36.313450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:38.096 [2024-11-26 19:27:36.313458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:38.096 [2024-11-26 19:27:36.313466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:19:38.096 [2024-11-26 19:27:36.313475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.096 [2024-11-26 19:27:36.313766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:38.096 [2024-11-26 19:27:36.313855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:38.096 [2024-11-26 19:27:36.313994] uring.c: 664:uring_sock_create: *ERROR*: 
connect() failed, errno = 111 00:19:38.096 [2024-11-26 19:27:36.314026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cee50 with addr=10.0.0.3, port=4420 00:19:38.096 [2024-11-26 19:27:36.314038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cee50 is same with the state(6) to be set 00:19:38.096 [2024-11-26 19:27:36.314056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:38.096 [2024-11-26 19:27:36.314071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:38.096 [2024-11-26 19:27:36.314080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:38.096 [2024-11-26 19:27:36.314091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:38.096 [2024-11-26 19:27:36.314102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:38.096 [2024-11-26 19:27:36.314113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:38.096 19:27:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:39.031 3922.00 IOPS, 15.32 MiB/s [2024-11-26T19:27:37.471Z] [2024-11-26 19:27:37.314250] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.031 [2024-11-26 19:27:37.314316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cee50 with addr=10.0.0.3, port=4420 00:19:39.031 [2024-11-26 19:27:37.314333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cee50 is same with the state(6) to be set 00:19:39.031 [2024-11-26 19:27:37.314359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:39.031 [2024-11-26 19:27:37.314378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:39.031 [2024-11-26 19:27:37.314387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:39.031 [2024-11-26 19:27:37.314399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:39.031 [2024-11-26 19:27:37.314410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
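The long run of "ABORTED - SQ DELETION (00/08)" completions above is the host side cleaning up after the transport dropped: nvme_qpair aborts the queued I/O and completes each outstanding READ/WRITE manually with generic status 00h/08h (Command Aborted due to SQ Deletion), and the connect() failure with errno = 111 (ECONNREFUSED) that follows shows why the qpair cannot simply be rebuilt: nothing is listening on 10.0.0.3:4420 at this point, and the listener is only added back with the nvmf_subsystem_add_listener call further down. A quick way to size such a burst from a saved copy of this output (the file name is only illustrative):

# count how many queued I/Os were completed as aborted in the captured log
grep -o 'ABORTED - SQ DELETION' bdevperf_timeout.log | wc -l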
00:19:39.031 [2024-11-26 19:27:37.314422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:39.967 2614.67 IOPS, 10.21 MiB/s [2024-11-26T19:27:38.407Z] [2024-11-26 19:27:38.314569] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.967 [2024-11-26 19:27:38.314638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cee50 with addr=10.0.0.3, port=4420 00:19:39.967 [2024-11-26 19:27:38.314655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cee50 is same with the state(6) to be set 00:19:39.967 [2024-11-26 19:27:38.314682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:39.967 [2024-11-26 19:27:38.314701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:39.967 [2024-11-26 19:27:38.314711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:39.967 [2024-11-26 19:27:38.314724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:39.967 [2024-11-26 19:27:38.314746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:39.967 [2024-11-26 19:27:38.314757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:40.903 1961.00 IOPS, 7.66 MiB/s [2024-11-26T19:27:39.343Z] [2024-11-26 19:27:39.318505] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.904 [2024-11-26 19:27:39.318569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cee50 with addr=10.0.0.3, port=4420 00:19:40.904 [2024-11-26 19:27:39.318587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cee50 is same with the state(6) to be set 00:19:40.904 [2024-11-26 19:27:39.318838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cee50 (9): Bad file descriptor 00:19:40.904 [2024-11-26 19:27:39.319102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:40.904 [2024-11-26 19:27:39.319123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:40.904 [2024-11-26 19:27:39.319135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:40.904 [2024-11-26 19:27:39.319147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
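Each retry above follows the same one-second cadence (19:27:36, :37, :38, :39): the uring connect() fails with ECONNREFUSED, the controller is marked as being in a failed state, the reset is reported as failed, and bdev_nvme immediately schedules the next "resetting controller" attempt. Recovery only becomes possible once the target listens again, which is what the nvmf_subsystem_add_listener call just below restores. A minimal sketch of the sequence being exercised, built from the same RPCs that appear in this log (the target-side RPC socket is assumed to be the default /var/tmp/spdk.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# drop the listener: in-flight I/O is aborted and the host starts reconnect retries
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
sleep 3   # let a few reconnect attempts fail with ECONNREFUSED
# restore the listener: the next retry connects and the controller reset succeeds
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420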
00:19:40.904 [2024-11-26 19:27:39.319158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:40.904 19:27:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:41.470 [2024-11-26 19:27:39.608870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:41.470 19:27:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81944 00:19:42.038 1568.80 IOPS, 6.13 MiB/s [2024-11-26T19:27:40.478Z] [2024-11-26 19:27:40.343377] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:43.910 2571.17 IOPS, 10.04 MiB/s [2024-11-26T19:27:43.286Z] 3525.29 IOPS, 13.77 MiB/s [2024-11-26T19:27:44.221Z] 4232.38 IOPS, 16.53 MiB/s [2024-11-26T19:27:45.598Z] 4780.22 IOPS, 18.67 MiB/s [2024-11-26T19:27:45.598Z] 5206.80 IOPS, 20.34 MiB/s 00:19:47.158 Latency(us) 00:19:47.158 [2024-11-26T19:27:45.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.158 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:47.158 Verification LBA range: start 0x0 length 0x4000 00:19:47.158 NVMe0n1 : 10.01 5212.25 20.36 3668.74 0.00 14376.67 711.21 3019898.88 00:19:47.158 [2024-11-26T19:27:45.598Z] =================================================================================================================== 00:19:47.158 [2024-11-26T19:27:45.598Z] Total : 5212.25 20.36 3668.74 0.00 14376.67 0.00 3019898.88 00:19:47.158 { 00:19:47.158 "results": [ 00:19:47.158 { 00:19:47.158 "job": "NVMe0n1", 00:19:47.158 "core_mask": "0x4", 00:19:47.158 "workload": "verify", 00:19:47.158 "status": "finished", 00:19:47.158 "verify_range": { 00:19:47.158 "start": 0, 00:19:47.158 "length": 16384 00:19:47.158 }, 00:19:47.158 "queue_depth": 128, 00:19:47.158 "io_size": 4096, 00:19:47.158 "runtime": 10.008347, 00:19:47.158 "iops": 5212.249335479675, 00:19:47.158 "mibps": 20.36034896671748, 00:19:47.158 "io_failed": 36718, 00:19:47.158 "io_timeout": 0, 00:19:47.158 "avg_latency_us": 14376.668695541892, 00:19:47.158 "min_latency_us": 711.2145454545455, 00:19:47.158 "max_latency_us": 3019898.88 00:19:47.158 } 00:19:47.158 ], 00:19:47.158 "core_count": 1 00:19:47.158 } 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81822 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81822 ']' 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81822 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81822 00:19:47.158 killing process with pid 81822 00:19:47.158 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.158 00:19:47.158 Latency(us) 00:19:47.158 [2024-11-26T19:27:45.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.158 [2024-11-26T19:27:45.598Z] =================================================================================================================== 00:19:47.158 [2024-11-26T19:27:45.598Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81822' 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81822 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81822 00:19:47.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.158 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82059 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82059 /var/tmp/bdevperf.sock 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82059 ']' 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.159 19:27:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.159 [2024-11-26 19:27:45.564767] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
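With the bdevperf instance under test (pid 81822, going by the reactor_2 process name and the shutdown banner above) killed after its 10-second run, the test relaunches bdevperf in RPC-driven mode for the next timeout scenario. The invocation above, re-listed with per-flag notes (the notes are standard bdevperf usage as I read it, corroborated where possible by later messages in this log, not something the log states itself):

# -m 0x4  : run on core 2 only (matches "Reactor started on core 2" below)
# -z      : start idle and wait for RPCs; the workload is only started later via perform_tests
# -r PATH : private RPC socket that the test then waits on and configures through
# -q/-o   : queue depth 128, 4096-byte I/Os
# -w/-t   : random-read workload for 10 seconds
# -f      : carried over from the traced command line, left unannotated here
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f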
00:19:47.159 [2024-11-26 19:27:45.564926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82059 ] 00:19:47.418 [2024-11-26 19:27:45.711560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.418 [2024-11-26 19:27:45.792474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.676 [2024-11-26 19:27:45.869814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.243 19:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.243 19:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:48.243 19:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82059 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:48.243 19:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82075 00:19:48.243 19:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:48.501 19:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:49.067 NVMe0n1 00:19:49.068 19:27:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82116 00:19:49.068 19:27:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.068 19:27:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:49.068 Running I/O for 10 seconds... 
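The new controller is attached with explicit recovery limits: --reconnect-delay-sec 2 spaces reconnect attempts two seconds apart, and --ctrlr-loss-timeout-sec 5 bounds how long bdev_nvme keeps retrying before it treats the controller as lost, so a target outage longer than roughly five seconds should now surface as failed I/O instead of the transparent recovery seen in the previous run (that reading follows from the option names; the exact cutoff behaviour is not spelled out in this log). The test also attaches scripts/bpf/nvmf_timeout.bt to the new pid via bpftrace.sh before the workload starts. Condensed, the host-side setup traced above is:

# configuration of the second bdevperf over its private RPC socket, as traced in this log
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc -s "$sock" bdev_nvme_set_options -r -1 -e 9
$rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# kick off the 10-second randread workload defined on the bdevperf command line
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests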
00:19:50.001 19:27:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:50.263 13970.00 IOPS, 54.57 MiB/s [2024-11-26T19:27:48.703Z] [2024-11-26 19:27:48.588695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 
00:19:50.263 [2024-11-26 19:27:48.588944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.588994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.263 [2024-11-26 19:27:48.589225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589317] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the 
state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cac0 is same with the state(6) to be set 00:19:50.264 [2024-11-26 19:27:48.589881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.264 [2024-11-26 19:27:48.589950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.264 [2024-11-26 19:27:48.589976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.264 [2024-11-26 19:27:48.589986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:50.264 [2024-11-26 19:27:48.589996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.264 [2024-11-26 19:27:48.590005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.264 [2024-11-26 19:27:48.590015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.264 [2024-11-26 19:27:48.590023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.264 [2024-11-26 19:27:48.590033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.264 [2024-11-26 19:27:48.590041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.264 [2024-11-26 19:27:48.590052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.264 [2024-11-26 19:27:48.590060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.264 [2024-11-26 19:27:48.590070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:50.265 [2024-11-26 19:27:48.590178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 
19:27:48.590365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.265 [2024-11-26 19:27:48.590799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.265 [2024-11-26 19:27:48.590809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61704 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.590986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.590995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.266 [2024-11-26 19:27:48.591132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591314] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.266 [2024-11-26 19:27:48.591588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.266 [2024-11-26 19:27:48.591596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.591987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.591998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 
[2024-11-26 19:27:48.592141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.267 [2024-11-26 19:27:48.592261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.267 [2024-11-26 19:27:48.592270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.268 [2024-11-26 19:27:48.592277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.268 [2024-11-26 19:27:48.592294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.268 [2024-11-26 19:27:48.592311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.268 [2024-11-26 19:27:48.592329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.268 [2024-11-26 19:27:48.592347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.268 [2024-11-26 19:27:48.592366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.268 [2024-11-26 19:27:48.592400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceeec0 is same with the state(6) to be set 00:19:50.268 [2024-11-26 19:27:48.592420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.268 [2024-11-26 19:27:48.592427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.268 [2024-11-26 19:27:48.592435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107728 len:8 PRP1 0x0 PRP2 0x0 00:19:50.268 [2024-11-26 19:27:48.592443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.268 [2024-11-26 19:27:48.592622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.268 [2024-11-26 19:27:48.592648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.268 [2024-11-26 19:27:48.592667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.268 [2024-11-26 19:27:48.592685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.268 [2024-11-26 19:27:48.592694] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e50 is same with the state(6) to be set 00:19:50.268 [2024-11-26 19:27:48.592944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:50.268 [2024-11-26 19:27:48.592972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81e50 (9): Bad file descriptor 00:19:50.268 [2024-11-26 19:27:48.593093] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.268 [2024-11-26 19:27:48.593114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc81e50 with addr=10.0.0.3, port=4420 00:19:50.268 [2024-11-26 19:27:48.593124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e50 is same with the state(6) to be set 00:19:50.268 [2024-11-26 19:27:48.593141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81e50 (9): Bad file descriptor 00:19:50.268 [2024-11-26 19:27:48.593157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:50.268 [2024-11-26 19:27:48.593166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:50.268 [2024-11-26 19:27:48.593176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:50.268 [2024-11-26 19:27:48.593186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:50.268 [2024-11-26 19:27:48.593197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:50.268 19:27:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82116 00:19:52.166 8322.50 IOPS, 32.51 MiB/s [2024-11-26T19:27:50.864Z] 5548.33 IOPS, 21.67 MiB/s [2024-11-26T19:27:50.864Z] [2024-11-26 19:27:50.606266] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.424 [2024-11-26 19:27:50.606377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc81e50 with addr=10.0.0.3, port=4420 00:19:52.424 [2024-11-26 19:27:50.606396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e50 is same with the state(6) to be set 00:19:52.424 [2024-11-26 19:27:50.606430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81e50 (9): Bad file descriptor 00:19:52.424 [2024-11-26 19:27:50.606453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:52.424 [2024-11-26 19:27:50.606464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:52.424 [2024-11-26 19:27:50.606477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:52.424 [2024-11-26 19:27:50.606490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
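The cycle that just completed is the core of the timeout test: errno = 111 from the uring_sock_create connect() is ECONNREFUSED on Linux, meaning nothing is accepting on 10.0.0.3 port 4420 any more, so each reconnect attempt is refused, bdev_nvme marks the controller failed, and roughly two seconds later (19:27:48 -> 19:27:50 -> 19:27:52 in the bracketed timestamps) it schedules the next reset. A quick way to pull that cadence out of a saved copy of this console output; build.log is a hypothetical file name and the sketch assumes each record sits on its own line in the saved file:
  grep -c 'connect() failed, errno = 111' build.log          # number of refused reconnect attempts
  grep 'resetting controller' build.log | awk '{print $1}'   # runtime stamp of each retry, spaced ~2 s apart
These are the same events the test later counts in trace.txt, where more than two 'reconnect delay' records appears to be the pass criterion (the (( 3 <= 2 )) check further down evaluating false).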
00:19:52.424 [2024-11-26 19:27:50.606503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:54.367 4161.25 IOPS, 16.25 MiB/s [2024-11-26T19:27:52.807Z] 3329.00 IOPS, 13.00 MiB/s [2024-11-26T19:27:52.807Z] [2024-11-26 19:27:52.606779] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.367 [2024-11-26 19:27:52.606869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc81e50 with addr=10.0.0.3, port=4420 00:19:54.367 [2024-11-26 19:27:52.606888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81e50 is same with the state(6) to be set 00:19:54.367 [2024-11-26 19:27:52.606936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81e50 (9): Bad file descriptor 00:19:54.367 [2024-11-26 19:27:52.606959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:54.367 [2024-11-26 19:27:52.606972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:54.367 [2024-11-26 19:27:52.606985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:54.367 [2024-11-26 19:27:52.606999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:54.367 [2024-11-26 19:27:52.607012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:56.235 2774.17 IOPS, 10.84 MiB/s [2024-11-26T19:27:54.675Z] 2377.86 IOPS, 9.29 MiB/s [2024-11-26T19:27:54.675Z] [2024-11-26 19:27:54.607121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:56.235 [2024-11-26 19:27:54.607211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:56.235 [2024-11-26 19:27:54.607229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:56.235 [2024-11-26 19:27:54.607241] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:56.235 [2024-11-26 19:27:54.607255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
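A note on the interleaved throughput samples (8322.50, 5548.33, 4161.25, 3329.00, 2774.17, 2377.86 IOPS above, 2080.62 just below): they read as cumulative averages, completed I/O divided by elapsed seconds, rather than instantaneous rates. Once the target became unreachable the completed count froze near 16,645, so the average simply decays with time, which a one-line check reproduces (a sketch, not part of the test scripts):
  for t in 2 3 4 5 6 7 8; do printf 't=%ss avg=%.2f IOPS\n' "$t" "$(bc -l <<< "16645/$t")"; done
Every value matches the samples to within rounding, and the summary that follows agrees: 2030.51 IOPS over the 8.19743 s runtime is again about 16,645 completed I/Os, with the reads aborted in the SQ-deletion flood above surfacing as the 128 io_failed.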
00:19:57.171 2080.62 IOPS, 8.13 MiB/s 00:19:57.171 Latency(us) 00:19:57.171 [2024-11-26T19:27:55.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.171 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:57.171 NVMe0n1 : 8.20 2030.51 7.93 15.61 0.00 62446.46 1355.40 7015926.69 00:19:57.171 [2024-11-26T19:27:55.611Z] =================================================================================================================== 00:19:57.172 [2024-11-26T19:27:55.612Z] Total : 2030.51 7.93 15.61 0.00 62446.46 1355.40 7015926.69 00:19:57.172 { 00:19:57.172 "results": [ 00:19:57.172 { 00:19:57.172 "job": "NVMe0n1", 00:19:57.172 "core_mask": "0x4", 00:19:57.172 "workload": "randread", 00:19:57.172 "status": "finished", 00:19:57.172 "queue_depth": 128, 00:19:57.172 "io_size": 4096, 00:19:57.172 "runtime": 8.19743, 00:19:57.172 "iops": 2030.5144417213687, 00:19:57.172 "mibps": 7.9316970379740965, 00:19:57.172 "io_failed": 128, 00:19:57.172 "io_timeout": 0, 00:19:57.172 "avg_latency_us": 62446.46318899963, 00:19:57.172 "min_latency_us": 1355.4036363636365, 00:19:57.172 "max_latency_us": 7015926.69090909 00:19:57.172 } 00:19:57.172 ], 00:19:57.172 "core_count": 1 00:19:57.172 } 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:57.430 Attaching 5 probes... 00:19:57.430 1476.712003: reset bdev controller NVMe0 00:19:57.430 1476.786226: reconnect bdev controller NVMe0 00:19:57.430 3489.826350: reconnect delay bdev controller NVMe0 00:19:57.430 3489.869704: reconnect bdev controller NVMe0 00:19:57.430 5490.351547: reconnect delay bdev controller NVMe0 00:19:57.430 5490.384294: reconnect bdev controller NVMe0 00:19:57.430 7490.852154: reconnect delay bdev controller NVMe0 00:19:57.430 7490.885372: reconnect bdev controller NVMe0 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82075 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82059 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82059 ']' 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82059 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82059 00:19:57.430 killing process with pid 82059 00:19:57.430 Received shutdown signal, test time was about 8.259081 seconds 00:19:57.430 00:19:57.430 Latency(us) 00:19:57.430 [2024-11-26T19:27:55.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.430 [2024-11-26T19:27:55.870Z] =================================================================================================================== 00:19:57.430 [2024-11-26T19:27:55.870Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.430 19:27:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82059' 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82059 00:19:57.430 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82059 00:19:57.689 19:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:57.947 rmmod nvme_tcp 00:19:57.947 rmmod nvme_fabrics 00:19:57.947 rmmod nvme_keyring 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81621 ']' 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81621 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81621 ']' 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81621 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81621 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.947 killing process with pid 81621 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81621' 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81621 00:19:57.947 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81621 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:58.205 19:27:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:58.205 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:19:58.464 00:19:58.464 real 0m47.971s 00:19:58.464 user 2m20.292s 00:19:58.464 sys 0m5.854s 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:58.464 ************************************ 00:19:58.464 END TEST nvmf_timeout 00:19:58.464 ************************************ 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:58.464 00:19:58.464 real 4m59.766s 00:19:58.464 user 13m2.505s 00:19:58.464 sys 1m10.406s 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.464 19:27:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:19:58.464 ************************************ 00:19:58.464 END TEST nvmf_host 00:19:58.464 ************************************ 00:19:58.723 19:27:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:19:58.723 19:27:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:19:58.723 00:19:58.723 real 12m34.264s 00:19:58.723 user 30m11.478s 00:19:58.723 sys 3m12.260s 00:19:58.723 19:27:56 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.723 19:27:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:58.723 ************************************ 00:19:58.723 END TEST nvmf_tcp 00:19:58.723 ************************************ 00:19:58.723 19:27:56 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:19:58.723 19:27:56 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:58.723 19:27:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:58.723 19:27:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.723 19:27:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.723 ************************************ 00:19:58.723 START TEST nvmf_dif 00:19:58.723 ************************************ 00:19:58.723 19:27:56 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:58.723 * Looking for test storage... 00:19:58.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.723 19:27:57 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:58.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.723 --rc genhtml_branch_coverage=1 00:19:58.723 --rc genhtml_function_coverage=1 00:19:58.723 --rc genhtml_legend=1 00:19:58.723 --rc geninfo_all_blocks=1 00:19:58.723 --rc geninfo_unexecuted_blocks=1 00:19:58.723 00:19:58.723 ' 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:58.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.723 --rc genhtml_branch_coverage=1 00:19:58.723 --rc genhtml_function_coverage=1 00:19:58.723 --rc genhtml_legend=1 00:19:58.723 --rc geninfo_all_blocks=1 00:19:58.723 --rc geninfo_unexecuted_blocks=1 00:19:58.723 00:19:58.723 ' 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:58.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.723 --rc genhtml_branch_coverage=1 00:19:58.723 --rc genhtml_function_coverage=1 00:19:58.723 --rc genhtml_legend=1 00:19:58.723 --rc geninfo_all_blocks=1 00:19:58.723 --rc geninfo_unexecuted_blocks=1 00:19:58.723 00:19:58.723 ' 00:19:58.723 19:27:57 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:58.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.723 --rc genhtml_branch_coverage=1 00:19:58.723 --rc genhtml_function_coverage=1 00:19:58.723 --rc genhtml_legend=1 00:19:58.723 --rc geninfo_all_blocks=1 00:19:58.723 --rc geninfo_unexecuted_blocks=1 00:19:58.723 00:19:58.723 ' 00:19:58.723 19:27:57 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.724 19:27:57 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.724 19:27:57 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.724 19:27:57 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:58.982 19:27:57 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.982 19:27:57 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.982 19:27:57 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.982 19:27:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.982 19:27:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.982 19:27:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.982 19:27:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:58.982 19:27:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.982 19:27:57 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:58.982 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.982 19:27:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:58.982 19:27:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:58.982 19:27:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:58.982 19:27:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:58.982 19:27:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.982 19:27:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:58.982 19:27:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:58.982 Cannot find device 
"nvmf_init_br" 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:58.982 Cannot find device "nvmf_init_br2" 00:19:58.982 19:27:57 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:58.983 Cannot find device "nvmf_tgt_br" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.983 Cannot find device "nvmf_tgt_br2" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:58.983 Cannot find device "nvmf_init_br" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:58.983 Cannot find device "nvmf_init_br2" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:58.983 Cannot find device "nvmf_tgt_br" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:58.983 Cannot find device "nvmf_tgt_br2" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:58.983 Cannot find device "nvmf_br" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:58.983 Cannot find device "nvmf_init_if" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:58.983 Cannot find device "nvmf_init_if2" 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:58.983 19:27:57 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:59.241 19:27:57 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:59.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:59.241 00:19:59.241 --- 10.0.0.3 ping statistics --- 00:19:59.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.242 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:59.242 19:27:57 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:59.242 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:59.242 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:19:59.242 00:19:59.242 --- 10.0.0.4 ping statistics --- 00:19:59.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.242 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:59.242 19:27:57 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:59.242 00:19:59.242 --- 10.0.0.1 ping statistics --- 00:19:59.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.242 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:59.242 19:27:57 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:59.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:19:59.242 00:19:59.242 --- 10.0.0.2 ping statistics --- 00:19:59.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.242 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:59.242 19:27:57 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.242 19:27:57 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:19:59.242 19:27:57 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:59.242 19:27:57 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:59.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:59.500 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:59.500 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:59.500 19:27:57 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.500 19:27:57 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.500 19:27:57 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.500 19:27:57 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.500 19:27:57 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.757 19:27:57 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.757 19:27:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:59.757 19:27:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:59.757 19:27:57 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:59.757 19:27:57 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82622 00:19:59.757 19:27:57 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:59.757 19:27:57 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82622 00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82622 ']' 00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
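The nvmf_veth_init sequence traced above reduces to a small recipe; a condensed sketch using the same interface names and 10.0.0.0/24 addressing as the trace (the link-up commands and the port-4420 iptables ACCEPT rules are visible in the trace but omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  # initiator-side veth pair and its bridge leg
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  # target-side veth pair, moved into the namespace where nvmf_tgt runs
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # everything meets on one bridge; the second pair (10.0.0.2 / 10.0.0.4) is wired identically
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br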
00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.757 19:27:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:59.757 [2024-11-26 19:27:58.025442] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:19:59.757 [2024-11-26 19:27:58.025545] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.757 [2024-11-26 19:27:58.180548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.014 [2024-11-26 19:27:58.250263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.014 [2024-11-26 19:27:58.250324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.014 [2024-11-26 19:27:58.250338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.014 [2024-11-26 19:27:58.250350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.014 [2024-11-26 19:27:58.250359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.014 [2024-11-26 19:27:58.250834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.014 [2024-11-26 19:27:58.310014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:20:00.014 19:27:58 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:00.014 19:27:58 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.014 19:27:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:00.014 19:27:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:00.014 [2024-11-26 19:27:58.431612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.014 19:27:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.014 19:27:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:00.014 ************************************ 00:20:00.014 START TEST fio_dif_1_default 00:20:00.014 ************************************ 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:00.014 19:27:58 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.014 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:00.272 bdev_null0 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:00.272 [2024-11-26 19:27:58.475774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:00.272 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.272 { 00:20:00.272 "params": { 00:20:00.272 "name": "Nvme$subsystem", 00:20:00.272 "trtype": "$TEST_TRANSPORT", 00:20:00.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.272 "adrfam": "ipv4", 00:20:00.272 "trsvcid": "$NVMF_PORT", 00:20:00.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.273 "hdgst": ${hdgst:-false}, 00:20:00.273 "ddgst": ${ddgst:-false} 00:20:00.273 }, 00:20:00.273 "method": "bdev_nvme_attach_controller" 00:20:00.273 } 00:20:00.273 EOF 00:20:00.273 )") 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
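The fio_bdev wrapper above hands fio two anonymous files: /dev/fd/62 with the SPDK bdev JSON config being assembled here, and /dev/fd/61 with the generated fio job. A minimal stand-alone equivalent with ordinary files, assuming bdev.json wraps the resolved bdev_nvme_attach_controller entry printed just below in the usual {"subsystems": [{"subsystem": "bdev", "config": [...]}]} envelope (the envelope itself is not shown in the trace), and with a job file reconstructed from the filename0 result lines (the Nvme0n1 bdev name is an assumption):

  # job.fio -- reconstructed from the filename0 result lines below; bdev name assumed
  [filename0]
  filename=Nvme0n1
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=4096
  iodepth=4

  # invocation, mirroring the fio_plugin call in the trace with regular files
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio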
00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:00.273 "params": { 00:20:00.273 "name": "Nvme0", 00:20:00.273 "trtype": "tcp", 00:20:00.273 "traddr": "10.0.0.3", 00:20:00.273 "adrfam": "ipv4", 00:20:00.273 "trsvcid": "4420", 00:20:00.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:00.273 "hdgst": false, 00:20:00.273 "ddgst": false 00:20:00.273 }, 00:20:00.273 "method": "bdev_nvme_attach_controller" 00:20:00.273 }' 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:00.273 19:27:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:00.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:00.530 fio-3.35 00:20:00.530 Starting 1 thread 00:20:12.747 00:20:12.747 filename0: (groupid=0, jobs=1): err= 0: pid=82681: Tue Nov 26 19:28:09 2024 00:20:12.747 read: IOPS=8604, BW=33.6MiB/s (35.2MB/s)(336MiB/10001msec) 00:20:12.747 slat (usec): min=6, max=1198, avg= 8.69, stdev= 5.39 00:20:12.747 clat (usec): min=324, max=2218, avg=439.58, stdev=36.97 00:20:12.747 lat (usec): min=330, max=2253, avg=448.27, stdev=37.91 00:20:12.747 clat percentiles (usec): 00:20:12.747 | 1.00th=[ 359], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 416], 00:20:12.747 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 437], 60.00th=[ 441], 00:20:12.747 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 482], 95.00th=[ 498], 00:20:12.747 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 594], 99.95th=[ 668], 00:20:12.747 | 99.99th=[ 1647] 00:20:12.747 bw ( KiB/s): min=32768, max=36096, per=100.00%, avg=34502.32, stdev=981.70, samples=19 00:20:12.747 iops : min= 8192, max= 9024, avg=8625.68, stdev=245.48, samples=19 00:20:12.747 lat (usec) : 500=95.22%, 750=4.75%, 1000=0.01% 00:20:12.747 lat (msec) : 2=0.01%, 4=0.01% 00:20:12.747 cpu : usr=85.29%, sys=12.84%, ctx=28, majf=0, minf=9 00:20:12.747 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.747 issued rwts: total=86052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.747 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:12.747 00:20:12.747 Run status group 0 (all jobs): 
00:20:12.747 READ: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=336MiB (352MB), run=10001-10001msec 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.747 00:20:12.747 real 0m11.094s 00:20:12.747 user 0m9.259s 00:20:12.747 sys 0m1.568s 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 ************************************ 00:20:12.747 END TEST fio_dif_1_default 00:20:12.747 ************************************ 00:20:12.747 19:28:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:12.747 19:28:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:12.747 19:28:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.747 19:28:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 ************************************ 00:20:12.747 START TEST fio_dif_1_multi_subsystems 00:20:12.747 ************************************ 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 bdev_null0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.747 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.748 [2024-11-26 19:28:09.631828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.748 bdev_null1 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.748 { 00:20:12.748 "params": { 00:20:12.748 "name": "Nvme$subsystem", 00:20:12.748 "trtype": "$TEST_TRANSPORT", 00:20:12.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.748 "adrfam": "ipv4", 00:20:12.748 "trsvcid": "$NVMF_PORT", 00:20:12.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.748 "hdgst": ${hdgst:-false}, 00:20:12.748 "ddgst": ${ddgst:-false} 00:20:12.748 }, 00:20:12.748 "method": "bdev_nvme_attach_controller" 00:20:12.748 } 00:20:12.748 EOF 00:20:12.748 )") 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:12.748 19:28:09 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.748 { 00:20:12.748 "params": { 00:20:12.748 "name": "Nvme$subsystem", 00:20:12.748 "trtype": "$TEST_TRANSPORT", 00:20:12.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.748 "adrfam": "ipv4", 00:20:12.748 "trsvcid": "$NVMF_PORT", 00:20:12.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.748 "hdgst": ${hdgst:-false}, 00:20:12.748 "ddgst": ${ddgst:-false} 00:20:12.748 }, 00:20:12.748 "method": "bdev_nvme_attach_controller" 00:20:12.748 } 00:20:12.748 EOF 00:20:12.748 )") 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:12.748 "params": { 00:20:12.748 "name": "Nvme0", 00:20:12.748 "trtype": "tcp", 00:20:12.748 "traddr": "10.0.0.3", 00:20:12.748 "adrfam": "ipv4", 00:20:12.748 "trsvcid": "4420", 00:20:12.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:12.748 "hdgst": false, 00:20:12.748 "ddgst": false 00:20:12.748 }, 00:20:12.748 "method": "bdev_nvme_attach_controller" 00:20:12.748 },{ 00:20:12.748 "params": { 00:20:12.748 "name": "Nvme1", 00:20:12.748 "trtype": "tcp", 00:20:12.748 "traddr": "10.0.0.3", 00:20:12.748 "adrfam": "ipv4", 00:20:12.748 "trsvcid": "4420", 00:20:12.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.748 "hdgst": false, 00:20:12.748 "ddgst": false 00:20:12.748 }, 00:20:12.748 "method": "bdev_nvme_attach_controller" 00:20:12.748 }' 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
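Target-side, the create_subsystems 0 1 calls traced above boil down to one short rpc sequence per subsystem; a condensed sketch (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, and the size/block-size/metadata/DIF values are the NULL_* defaults from dif.sh seen earlier):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for sub in 0 1; do
      # null bdev with 512-byte blocks, 16-byte metadata, DIF type 1
      $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.3 -s 4420
  done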
00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:12.748 19:28:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.748 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:12.748 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:12.748 fio-3.35 00:20:12.748 Starting 2 threads 00:20:22.804 00:20:22.804 filename0: (groupid=0, jobs=1): err= 0: pid=82845: Tue Nov 26 19:28:20 2024 00:20:22.804 read: IOPS=4824, BW=18.8MiB/s (19.8MB/s)(188MiB/10001msec) 00:20:22.804 slat (nsec): min=6439, max=77038, avg=12978.34, stdev=4035.31 00:20:22.804 clat (usec): min=631, max=1457, avg=792.77, stdev=38.72 00:20:22.804 lat (usec): min=644, max=1471, avg=805.75, stdev=39.35 00:20:22.804 clat percentiles (usec): 00:20:22.804 | 1.00th=[ 693], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 766], 00:20:22.804 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:22.804 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:20:22.804 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 979], 99.95th=[ 1012], 00:20:22.804 | 99.99th=[ 1418] 00:20:22.804 bw ( KiB/s): min=18880, max=19808, per=50.05%, avg=19317.89, stdev=246.50, samples=19 00:20:22.804 iops : min= 4720, max= 4952, avg=4829.47, stdev=61.63, samples=19 00:20:22.804 lat (usec) : 750=8.96%, 1000=90.98% 00:20:22.804 lat (msec) : 2=0.06% 00:20:22.804 cpu : usr=89.87%, sys=8.68%, ctx=19, majf=0, minf=0 00:20:22.804 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.804 issued rwts: total=48252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.804 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:22.804 filename1: (groupid=0, jobs=1): err= 0: pid=82846: Tue Nov 26 19:28:20 2024 00:20:22.804 read: IOPS=4824, BW=18.8MiB/s (19.8MB/s)(188MiB/10001msec) 00:20:22.804 slat (usec): min=6, max=126, avg=12.84, stdev= 3.90 00:20:22.804 clat (usec): min=587, max=1477, avg=794.24, stdev=48.68 00:20:22.804 lat (usec): min=594, max=1489, avg=807.09, stdev=49.81 00:20:22.804 clat percentiles (usec): 00:20:22.804 | 1.00th=[ 676], 5.00th=[ 709], 10.00th=[ 725], 20.00th=[ 758], 00:20:22.804 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:20:22.804 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 848], 95.00th=[ 873], 00:20:22.804 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 996], 99.95th=[ 1037], 00:20:22.804 | 99.99th=[ 1270] 00:20:22.804 bw ( KiB/s): min=18880, max=19808, per=50.05%, avg=19317.89, stdev=246.50, samples=19 00:20:22.804 iops : min= 4720, max= 4952, avg=4829.47, stdev=61.63, samples=19 00:20:22.804 lat (usec) : 750=17.75%, 1000=82.16% 00:20:22.804 lat (msec) : 2=0.09% 00:20:22.804 cpu : usr=89.91%, sys=8.65%, ctx=93, majf=0, minf=0 00:20:22.804 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.804 issued rwts: total=48252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:22.804 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:22.804 00:20:22.804 Run status group 0 (all jobs): 00:20:22.804 READ: bw=37.7MiB/s (39.5MB/s), 18.8MiB/s-18.8MiB/s (19.8MB/s-19.8MB/s), io=377MiB (395MB), run=10001-10001msec 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.804 00:20:22.804 real 0m11.200s 00:20:22.804 user 0m18.772s 00:20:22.804 sys 0m2.035s 00:20:22.804 ************************************ 00:20:22.804 END TEST fio_dif_1_multi_subsystems 00:20:22.804 ************************************ 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.804 19:28:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.804 19:28:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:22.804 19:28:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:22.805 19:28:20 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.805 19:28:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:22.805 ************************************ 00:20:22.805 START TEST fio_dif_rand_params 00:20:22.805 ************************************ 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.805 bdev_null0 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.805 [2024-11-26 19:28:20.882924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- 
# fio /dev/fd/62 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.805 { 00:20:22.805 "params": { 00:20:22.805 "name": "Nvme$subsystem", 00:20:22.805 "trtype": "$TEST_TRANSPORT", 00:20:22.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.805 "adrfam": "ipv4", 00:20:22.805 "trsvcid": "$NVMF_PORT", 00:20:22.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.805 "hdgst": ${hdgst:-false}, 00:20:22.805 "ddgst": ${ddgst:-false} 00:20:22.805 }, 00:20:22.805 "method": "bdev_nvme_attach_controller" 00:20:22.805 } 00:20:22.805 EOF 00:20:22.805 )") 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:22.805 "params": { 00:20:22.805 "name": "Nvme0", 00:20:22.805 "trtype": "tcp", 00:20:22.805 "traddr": "10.0.0.3", 00:20:22.805 "adrfam": "ipv4", 00:20:22.805 "trsvcid": "4420", 00:20:22.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:22.805 "hdgst": false, 00:20:22.805 "ddgst": false 00:20:22.805 }, 00:20:22.805 "method": "bdev_nvme_attach_controller" 00:20:22.805 }' 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:22.805 19:28:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.805 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:22.805 ... 
00:20:22.805 fio-3.35 00:20:22.805 Starting 3 threads 00:20:29.372 00:20:29.372 filename0: (groupid=0, jobs=1): err= 0: pid=83002: Tue Nov 26 19:28:26 2024 00:20:29.372 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5003msec) 00:20:29.372 slat (nsec): min=7383, max=76763, avg=16101.76, stdev=5859.33 00:20:29.372 clat (usec): min=9239, max=12383, avg=11357.81, stdev=290.24 00:20:29.372 lat (usec): min=9256, max=12460, avg=11373.91, stdev=290.82 00:20:29.372 clat percentiles (usec): 00:20:29.372 | 1.00th=[10683], 5.00th=[10814], 10.00th=[10945], 20.00th=[11207], 00:20:29.372 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:20:29.372 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:20:29.372 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12387], 99.95th=[12387], 00:20:29.372 | 99.99th=[12387] 00:20:29.372 bw ( KiB/s): min=32256, max=34560, per=33.36%, avg=33706.67, stdev=809.54, samples=9 00:20:29.372 iops : min= 252, max= 270, avg=263.33, stdev= 6.32, samples=9 00:20:29.372 lat (msec) : 10=0.23%, 20=99.77% 00:20:29.372 cpu : usr=91.40%, sys=8.08%, ctx=9, majf=0, minf=0 00:20:29.372 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.373 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:29.373 filename0: (groupid=0, jobs=1): err= 0: pid=83003: Tue Nov 26 19:28:26 2024 00:20:29.373 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5004msec) 00:20:29.373 slat (nsec): min=7489, max=76820, avg=16460.23, stdev=5720.31 00:20:29.373 clat (usec): min=9249, max=12383, avg=11356.15, stdev=288.71 00:20:29.373 lat (usec): min=9270, max=12460, avg=11372.61, stdev=289.33 00:20:29.373 clat percentiles (usec): 00:20:29.373 | 1.00th=[10552], 5.00th=[10814], 10.00th=[10945], 20.00th=[11207], 00:20:29.373 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:20:29.373 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:20:29.373 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12387], 99.95th=[12387], 00:20:29.373 | 99.99th=[12387] 00:20:29.373 bw ( KiB/s): min=32256, max=34560, per=33.36%, avg=33706.67, stdev=809.54, samples=9 00:20:29.373 iops : min= 252, max= 270, avg=263.33, stdev= 6.32, samples=9 00:20:29.373 lat (msec) : 10=0.23%, 20=99.77% 00:20:29.373 cpu : usr=91.94%, sys=7.50%, ctx=15, majf=0, minf=0 00:20:29.373 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.373 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:29.373 filename0: (groupid=0, jobs=1): err= 0: pid=83004: Tue Nov 26 19:28:26 2024 00:20:29.373 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5009msec) 00:20:29.373 slat (usec): min=6, max=517, avg=14.24, stdev=15.19 00:20:29.373 clat (usec): min=4841, max=12269, avg=11346.11, stdev=411.53 00:20:29.373 lat (usec): min=4850, max=12282, avg=11360.35, stdev=412.14 00:20:29.373 clat percentiles (usec): 00:20:29.373 | 1.00th=[10552], 5.00th=[10814], 10.00th=[10945], 20.00th=[11207], 00:20:29.373 | 30.00th=[11338], 40.00th=[11338], 
50.00th=[11338], 60.00th=[11469], 00:20:29.373 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:20:29.373 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:20:29.373 | 99.99th=[12256] 00:20:29.373 bw ( KiB/s): min=33024, max=35328, per=33.37%, avg=33715.20, stdev=763.72, samples=10 00:20:29.373 iops : min= 258, max= 276, avg=263.40, stdev= 5.97, samples=10 00:20:29.373 lat (msec) : 10=0.23%, 20=99.77% 00:20:29.373 cpu : usr=91.45%, sys=7.93%, ctx=61, majf=0, minf=0 00:20:29.373 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.373 issued rwts: total=1320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:29.373 00:20:29.373 Run status group 0 (all jobs): 00:20:29.373 READ: bw=98.7MiB/s (103MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=494MiB (518MB), run=5003-5009msec 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:29.373 
19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 bdev_null0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 [2024-11-26 19:28:26.955072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 bdev_null1 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 bdev_null2 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:29.373 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.374 19:28:27 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.374 { 00:20:29.374 "params": { 00:20:29.374 "name": "Nvme$subsystem", 00:20:29.374 "trtype": "$TEST_TRANSPORT", 00:20:29.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.374 "adrfam": "ipv4", 00:20:29.374 "trsvcid": "$NVMF_PORT", 00:20:29.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.374 "hdgst": ${hdgst:-false}, 00:20:29.374 "ddgst": ${ddgst:-false} 00:20:29.374 }, 00:20:29.374 "method": "bdev_nvme_attach_controller" 00:20:29.374 } 00:20:29.374 EOF 00:20:29.374 )") 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.374 { 00:20:29.374 "params": { 00:20:29.374 "name": "Nvme$subsystem", 00:20:29.374 "trtype": "$TEST_TRANSPORT", 00:20:29.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.374 "adrfam": "ipv4", 00:20:29.374 "trsvcid": "$NVMF_PORT", 00:20:29.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.374 "hdgst": ${hdgst:-false}, 00:20:29.374 "ddgst": ${ddgst:-false} 00:20:29.374 }, 00:20:29.374 "method": "bdev_nvme_attach_controller" 00:20:29.374 } 00:20:29.374 EOF 00:20:29.374 )") 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
cat 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.374 { 00:20:29.374 "params": { 00:20:29.374 "name": "Nvme$subsystem", 00:20:29.374 "trtype": "$TEST_TRANSPORT", 00:20:29.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.374 "adrfam": "ipv4", 00:20:29.374 "trsvcid": "$NVMF_PORT", 00:20:29.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.374 "hdgst": ${hdgst:-false}, 00:20:29.374 "ddgst": ${ddgst:-false} 00:20:29.374 }, 00:20:29.374 "method": "bdev_nvme_attach_controller" 00:20:29.374 } 00:20:29.374 EOF 00:20:29.374 )") 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:29.374 "params": { 00:20:29.374 "name": "Nvme0", 00:20:29.374 "trtype": "tcp", 00:20:29.374 "traddr": "10.0.0.3", 00:20:29.374 "adrfam": "ipv4", 00:20:29.374 "trsvcid": "4420", 00:20:29.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:29.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:29.374 "hdgst": false, 00:20:29.374 "ddgst": false 00:20:29.374 }, 00:20:29.374 "method": "bdev_nvme_attach_controller" 00:20:29.374 },{ 00:20:29.374 "params": { 00:20:29.374 "name": "Nvme1", 00:20:29.374 "trtype": "tcp", 00:20:29.374 "traddr": "10.0.0.3", 00:20:29.374 "adrfam": "ipv4", 00:20:29.374 "trsvcid": "4420", 00:20:29.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.374 "hdgst": false, 00:20:29.374 "ddgst": false 00:20:29.374 }, 00:20:29.374 "method": "bdev_nvme_attach_controller" 00:20:29.374 },{ 00:20:29.374 "params": { 00:20:29.374 "name": "Nvme2", 00:20:29.374 "trtype": "tcp", 00:20:29.374 "traddr": "10.0.0.3", 00:20:29.374 "adrfam": "ipv4", 00:20:29.374 "trsvcid": "4420", 00:20:29.374 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.374 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:29.374 "hdgst": false, 00:20:29.374 "ddgst": false 00:20:29.374 }, 00:20:29.374 "method": "bdev_nvme_attach_controller" 00:20:29.374 }' 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:29.374 19:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.374 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:29.374 ... 00:20:29.374 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:29.374 ... 00:20:29.374 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:29.374 ... 00:20:29.374 fio-3.35 00:20:29.374 Starting 24 threads 00:20:41.644 00:20:41.644 filename0: (groupid=0, jobs=1): err= 0: pid=83101: Tue Nov 26 19:28:38 2024 00:20:41.644 read: IOPS=233, BW=933KiB/s (955kB/s)(9372KiB/10047msec) 00:20:41.644 slat (usec): min=6, max=8082, avg=27.27, stdev=234.94 00:20:41.644 clat (msec): min=13, max=136, avg=68.41, stdev=18.20 00:20:41.644 lat (msec): min=13, max=136, avg=68.44, stdev=18.20 00:20:41.644 clat percentiles (msec): 00:20:41.644 | 1.00th=[ 21], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 52], 00:20:41.644 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 72], 00:20:41.644 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 96], 00:20:41.644 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:20:41.644 | 99.99th=[ 136] 00:20:41.644 bw ( KiB/s): min= 760, max= 1186, per=4.15%, avg=930.90, stdev=88.14, samples=20 00:20:41.644 iops : min= 190, max= 296, avg=232.70, stdev=21.96, samples=20 00:20:41.644 lat (msec) : 20=1.15%, 50=17.97%, 100=77.72%, 250=3.16% 00:20:41.644 cpu : usr=32.39%, sys=1.64%, ctx=898, majf=0, minf=9 00:20:41.644 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=81.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.644 filename0: (groupid=0, jobs=1): err= 0: pid=83102: Tue Nov 26 19:28:38 2024 00:20:41.644 read: IOPS=223, BW=895KiB/s (917kB/s)(8988KiB/10037msec) 00:20:41.644 slat (usec): min=7, max=8031, avg=28.57, stdev=229.63 00:20:41.644 clat (msec): min=16, max=147, avg=71.15, stdev=18.00 00:20:41.644 lat (msec): min=16, max=147, avg=71.18, stdev=18.00 00:20:41.644 clat percentiles (msec): 00:20:41.644 | 1.00th=[ 23], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:20:41.644 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:20:41.644 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 90], 95.00th=[ 100], 00:20:41.644 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 142], 00:20:41.644 | 99.99th=[ 148] 00:20:41.644 bw ( KiB/s): min= 728, max= 1277, per=3.99%, avg=894.65, stdev=115.68, samples=20 00:20:41.644 iops : min= 182, max= 319, avg=223.65, stdev=28.88, samples=20 00:20:41.644 lat (msec) : 20=0.71%, 50=12.24%, 100=82.78%, 250=4.27% 00:20:41.644 cpu : usr=39.52%, sys=1.74%, ctx=1293, majf=0, minf=9 00:20:41.644 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=76.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 complete : 0=0.0%, 4=89.4%, 8=9.2%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 issued rwts: total=2247,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:20:41.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.644 filename0: (groupid=0, jobs=1): err= 0: pid=83103: Tue Nov 26 19:28:38 2024 00:20:41.644 read: IOPS=238, BW=952KiB/s (975kB/s)(9556KiB/10036msec) 00:20:41.644 slat (usec): min=7, max=8036, avg=32.54, stdev=285.79 00:20:41.644 clat (msec): min=23, max=129, avg=67.03, stdev=17.50 00:20:41.644 lat (msec): min=23, max=129, avg=67.06, stdev=17.50 00:20:41.644 clat percentiles (msec): 00:20:41.644 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 49], 00:20:41.644 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:41.644 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 95], 00:20:41.644 | 99.00th=[ 116], 99.50th=[ 126], 99.90th=[ 130], 99.95th=[ 130], 00:20:41.644 | 99.99th=[ 130] 00:20:41.644 bw ( KiB/s): min= 816, max= 1168, per=4.23%, avg=948.90, stdev=83.26, samples=20 00:20:41.644 iops : min= 204, max= 292, avg=237.20, stdev=20.82, samples=20 00:20:41.644 lat (msec) : 50=23.69%, 100=73.38%, 250=2.93% 00:20:41.644 cpu : usr=33.11%, sys=1.42%, ctx=920, majf=0, minf=9 00:20:41.644 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 issued rwts: total=2389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.644 filename0: (groupid=0, jobs=1): err= 0: pid=83104: Tue Nov 26 19:28:38 2024 00:20:41.644 read: IOPS=244, BW=980KiB/s (1003kB/s)(9800KiB/10003msec) 00:20:41.644 slat (usec): min=3, max=8046, avg=32.49, stdev=323.76 00:20:41.644 clat (msec): min=3, max=125, avg=65.20, stdev=18.46 00:20:41.644 lat (msec): min=3, max=125, avg=65.24, stdev=18.46 00:20:41.644 clat percentiles (msec): 00:20:41.644 | 1.00th=[ 6], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 48], 00:20:41.644 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:20:41.644 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 95], 00:20:41.644 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 126], 00:20:41.644 | 99.99th=[ 126] 00:20:41.644 bw ( KiB/s): min= 816, max= 1080, per=4.26%, avg=954.53, stdev=57.63, samples=19 00:20:41.644 iops : min= 204, max= 270, avg=238.63, stdev=14.41, samples=19 00:20:41.644 lat (msec) : 4=0.12%, 10=1.59%, 20=0.37%, 50=25.14%, 100=70.61% 00:20:41.644 lat (msec) : 250=2.16% 00:20:41.644 cpu : usr=32.81%, sys=1.34%, ctx=893, majf=0, minf=9 00:20:41.644 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 issued rwts: total=2450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.644 filename0: (groupid=0, jobs=1): err= 0: pid=83105: Tue Nov 26 19:28:38 2024 00:20:41.644 read: IOPS=237, BW=948KiB/s (971kB/s)(9528KiB/10050msec) 00:20:41.644 slat (usec): min=7, max=7608, avg=33.79, stdev=299.74 00:20:41.644 clat (msec): min=14, max=126, avg=67.28, stdev=17.48 00:20:41.644 lat (msec): min=14, max=126, avg=67.32, stdev=17.47 00:20:41.644 clat percentiles (msec): 00:20:41.644 | 1.00th=[ 20], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 52], 00:20:41.644 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 
00:20:41.644 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 93], 00:20:41.644 | 99.00th=[ 115], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:20:41.644 | 99.99th=[ 127] 00:20:41.644 bw ( KiB/s): min= 792, max= 1389, per=4.22%, avg=946.25, stdev=118.07, samples=20 00:20:41.644 iops : min= 198, max= 347, avg=236.55, stdev=29.47, samples=20 00:20:41.644 lat (msec) : 20=1.34%, 50=16.62%, 100=79.30%, 250=2.73% 00:20:41.644 cpu : usr=40.72%, sys=1.70%, ctx=1345, majf=0, minf=9 00:20:41.644 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.644 issued rwts: total=2382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.644 filename0: (groupid=0, jobs=1): err= 0: pid=83106: Tue Nov 26 19:28:38 2024 00:20:41.644 read: IOPS=221, BW=887KiB/s (908kB/s)(8900KiB/10037msec) 00:20:41.644 slat (usec): min=4, max=8054, avg=37.03, stdev=385.23 00:20:41.644 clat (msec): min=36, max=128, avg=71.95, stdev=16.01 00:20:41.644 lat (msec): min=36, max=128, avg=71.98, stdev=16.01 00:20:41.644 clat percentiles (msec): 00:20:41.644 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:41.644 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:20:41.644 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 97], 00:20:41.644 | 99.00th=[ 117], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 129], 00:20:41.645 | 99.99th=[ 129] 00:20:41.645 bw ( KiB/s): min= 752, max= 992, per=3.94%, avg=883.60, stdev=71.67, samples=20 00:20:41.645 iops : min= 188, max= 248, avg=220.90, stdev=17.92, samples=20 00:20:41.645 lat (msec) : 50=14.11%, 100=82.11%, 250=3.78% 00:20:41.645 cpu : usr=32.04%, sys=1.34%, ctx=904, majf=0, minf=9 00:20:41.645 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 complete : 0=0.0%, 4=89.2%, 8=9.2%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.645 filename0: (groupid=0, jobs=1): err= 0: pid=83107: Tue Nov 26 19:28:38 2024 00:20:41.645 read: IOPS=260, BW=1041KiB/s (1066kB/s)(10.2MiB/10002msec) 00:20:41.645 slat (usec): min=7, max=4107, avg=21.67, stdev=114.77 00:20:41.645 clat (usec): min=1003, max=161134, avg=61404.54, stdev=25985.77 00:20:41.645 lat (usec): min=1011, max=161155, avg=61426.21, stdev=25989.81 00:20:41.645 clat percentiles (usec): 00:20:41.645 | 1.00th=[ 1467], 5.00th=[ 1598], 10.00th=[ 16188], 20.00th=[ 47973], 00:20:41.645 | 30.00th=[ 51119], 40.00th=[ 56361], 50.00th=[ 66323], 60.00th=[ 71828], 00:20:41.645 | 70.00th=[ 74974], 80.00th=[ 80217], 90.00th=[ 86508], 95.00th=[ 94897], 00:20:41.645 | 99.00th=[120062], 99.50th=[141558], 99.90th=[141558], 99.95th=[160433], 00:20:41.645 | 99.99th=[160433] 00:20:41.645 bw ( KiB/s): min= 656, max= 1024, per=4.16%, avg=931.37, stdev=94.41, samples=19 00:20:41.645 iops : min= 164, max= 256, avg=232.84, stdev=23.60, samples=19 00:20:41.645 lat (msec) : 2=6.03%, 4=1.61%, 10=2.19%, 20=0.27%, 50=17.95% 00:20:41.645 lat (msec) : 100=68.26%, 250=3.69% 00:20:41.645 cpu : usr=41.67%, sys=1.75%, ctx=1281, majf=0, minf=9 00:20:41.645 IO depths : 1=0.3%, 2=1.5%, 4=4.9%, 8=78.4%, 16=14.8%, 32=0.0%, 
>=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.645 filename0: (groupid=0, jobs=1): err= 0: pid=83108: Tue Nov 26 19:28:38 2024 00:20:41.645 read: IOPS=237, BW=950KiB/s (972kB/s)(9552KiB/10058msec) 00:20:41.645 slat (usec): min=4, max=9024, avg=28.92, stdev=338.08 00:20:41.645 clat (usec): min=1714, max=158821, avg=67200.45, stdev=22317.45 00:20:41.645 lat (usec): min=1721, max=158837, avg=67229.37, stdev=22323.85 00:20:41.645 clat percentiles (msec): 00:20:41.645 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 46], 20.00th=[ 57], 00:20:41.645 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:20:41.645 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 96], 00:20:41.645 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 134], 00:20:41.645 | 99.99th=[ 159] 00:20:41.645 bw ( KiB/s): min= 728, max= 2158, per=4.23%, avg=947.90, stdev=291.99, samples=20 00:20:41.645 iops : min= 182, max= 539, avg=236.95, stdev=72.89, samples=20 00:20:41.645 lat (msec) : 2=0.67%, 4=0.67%, 10=2.60%, 20=2.68%, 50=11.43% 00:20:41.645 lat (msec) : 100=78.39%, 250=3.56% 00:20:41.645 cpu : usr=33.13%, sys=1.42%, ctx=899, majf=0, minf=0 00:20:41.645 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=78.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.645 filename1: (groupid=0, jobs=1): err= 0: pid=83109: Tue Nov 26 19:28:38 2024 00:20:41.645 read: IOPS=224, BW=899KiB/s (921kB/s)(9020KiB/10032msec) 00:20:41.645 slat (usec): min=6, max=8078, avg=38.32, stdev=378.04 00:20:41.645 clat (msec): min=22, max=120, avg=70.88, stdev=17.50 00:20:41.645 lat (msec): min=22, max=120, avg=70.92, stdev=17.50 00:20:41.645 clat percentiles (msec): 00:20:41.645 | 1.00th=[ 25], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:41.645 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:20:41.645 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 101], 00:20:41.645 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 122], 00:20:41.645 | 99.99th=[ 122] 00:20:41.645 bw ( KiB/s): min= 653, max= 1136, per=4.01%, avg=897.85, stdev=104.93, samples=20 00:20:41.645 iops : min= 163, max= 284, avg=224.45, stdev=26.26, samples=20 00:20:41.645 lat (msec) : 50=15.48%, 100=80.09%, 250=4.43% 00:20:41.645 cpu : usr=32.81%, sys=1.38%, ctx=922, majf=0, minf=9 00:20:41.645 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.645 filename1: (groupid=0, jobs=1): err= 0: pid=83110: Tue Nov 26 19:28:38 2024 00:20:41.645 read: IOPS=246, BW=985KiB/s (1009kB/s)(9880KiB/10030msec) 00:20:41.645 slat (usec): min=3, max=4051, avg=19.12, stdev=81.92 00:20:41.645 clat (msec): min=11, max=125, avg=64.82, 
stdev=17.56 00:20:41.645 lat (msec): min=11, max=125, avg=64.84, stdev=17.56 00:20:41.645 clat percentiles (msec): 00:20:41.645 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:20:41.645 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:20:41.645 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 93], 00:20:41.645 | 99.00th=[ 110], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:20:41.645 | 99.99th=[ 127] 00:20:41.645 bw ( KiB/s): min= 848, max= 1338, per=4.39%, avg=984.50, stdev=101.18, samples=20 00:20:41.645 iops : min= 212, max= 334, avg=246.10, stdev=25.20, samples=20 00:20:41.645 lat (msec) : 20=0.40%, 50=27.00%, 100=70.12%, 250=2.47% 00:20:41.645 cpu : usr=35.01%, sys=1.47%, ctx=1111, majf=0, minf=9 00:20:41.645 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 issued rwts: total=2470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.645 filename1: (groupid=0, jobs=1): err= 0: pid=83111: Tue Nov 26 19:28:38 2024 00:20:41.645 read: IOPS=233, BW=935KiB/s (957kB/s)(9380KiB/10033msec) 00:20:41.645 slat (usec): min=4, max=8028, avg=26.76, stdev=279.50 00:20:41.645 clat (msec): min=14, max=129, avg=68.30, stdev=17.02 00:20:41.645 lat (msec): min=14, max=129, avg=68.33, stdev=17.02 00:20:41.645 clat percentiles (msec): 00:20:41.645 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:20:41.645 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:20:41.645 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 95], 00:20:41.645 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:20:41.645 | 99.99th=[ 130] 00:20:41.645 bw ( KiB/s): min= 848, max= 1152, per=4.17%, avg=933.70, stdev=69.70, samples=20 00:20:41.645 iops : min= 212, max= 288, avg=233.40, stdev=17.42, samples=20 00:20:41.645 lat (msec) : 20=0.09%, 50=17.74%, 100=79.28%, 250=2.90% 00:20:41.645 cpu : usr=37.14%, sys=1.76%, ctx=1218, majf=0, minf=9 00:20:41.645 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 issued rwts: total=2345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.645 filename1: (groupid=0, jobs=1): err= 0: pid=83112: Tue Nov 26 19:28:38 2024 00:20:41.645 read: IOPS=232, BW=930KiB/s (952kB/s)(9352KiB/10061msec) 00:20:41.645 slat (usec): min=5, max=8033, avg=26.16, stdev=219.58 00:20:41.645 clat (msec): min=4, max=152, avg=68.63, stdev=21.61 00:20:41.645 lat (msec): min=4, max=152, avg=68.66, stdev=21.60 00:20:41.645 clat percentiles (msec): 00:20:41.645 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 47], 20.00th=[ 55], 00:20:41.645 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 75], 00:20:41.645 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 91], 95.00th=[ 99], 00:20:41.645 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 153], 00:20:41.645 | 99.99th=[ 153] 00:20:41.645 bw ( KiB/s): min= 640, max= 1888, per=4.14%, avg=928.00, stdev=238.65, samples=20 00:20:41.645 iops : min= 160, max= 472, avg=232.00, stdev=59.66, samples=20 00:20:41.645 lat (msec) : 10=2.65%, 20=2.82%, 50=9.92%, 
100=80.07%, 250=4.53% 00:20:41.645 cpu : usr=52.21%, sys=2.38%, ctx=1497, majf=0, minf=9 00:20:41.645 IO depths : 1=0.1%, 2=2.2%, 4=8.5%, 8=74.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.645 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.645 filename1: (groupid=0, jobs=1): err= 0: pid=83113: Tue Nov 26 19:28:38 2024 00:20:41.645 read: IOPS=240, BW=963KiB/s (986kB/s)(9644KiB/10018msec) 00:20:41.645 slat (usec): min=4, max=8082, avg=37.62, stdev=399.92 00:20:41.645 clat (msec): min=17, max=121, avg=66.29, stdev=17.08 00:20:41.645 lat (msec): min=17, max=121, avg=66.33, stdev=17.08 00:20:41.645 clat percentiles (msec): 00:20:41.645 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:20:41.645 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:41.645 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 94], 00:20:41.645 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:20:41.645 | 99.99th=[ 122] 00:20:41.645 bw ( KiB/s): min= 792, max= 1128, per=4.28%, avg=959.70, stdev=96.25, samples=20 00:20:41.645 iops : min= 198, max= 282, avg=239.90, stdev=24.02, samples=20 00:20:41.645 lat (msec) : 20=0.25%, 50=25.34%, 100=71.92%, 250=2.49% 00:20:41.645 cpu : usr=32.22%, sys=1.28%, ctx=911, majf=0, minf=9 00:20:41.645 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.646 filename1: (groupid=0, jobs=1): err= 0: pid=83114: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=227, BW=909KiB/s (931kB/s)(9116KiB/10028msec) 00:20:41.646 slat (usec): min=4, max=8038, avg=26.56, stdev=207.22 00:20:41.646 clat (msec): min=22, max=125, avg=70.18, stdev=16.71 00:20:41.646 lat (msec): min=22, max=125, avg=70.20, stdev=16.70 00:20:41.646 clat percentiles (msec): 00:20:41.646 | 1.00th=[ 30], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:41.646 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:41.646 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 89], 95.00th=[ 99], 00:20:41.646 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:20:41.646 | 99.99th=[ 126] 00:20:41.646 bw ( KiB/s): min= 768, max= 1120, per=4.05%, avg=907.60, stdev=87.54, samples=20 00:20:41.646 iops : min= 192, max= 280, avg=226.90, stdev=21.88, samples=20 00:20:41.646 lat (msec) : 50=15.36%, 100=79.99%, 250=4.65% 00:20:41.646 cpu : usr=38.89%, sys=1.50%, ctx=1218, majf=0, minf=9 00:20:41.646 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=75.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:41.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=89.3%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.646 filename1: (groupid=0, jobs=1): err= 0: pid=83115: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=247, BW=990KiB/s (1013kB/s)(9900KiB/10004msec) 00:20:41.646 slat (usec): min=4, 
max=8037, avg=24.77, stdev=214.48 00:20:41.646 clat (msec): min=3, max=132, avg=64.57, stdev=19.50 00:20:41.646 lat (msec): min=3, max=132, avg=64.59, stdev=19.50 00:20:41.646 clat percentiles (msec): 00:20:41.646 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:20:41.646 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 72], 00:20:41.646 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 93], 00:20:41.646 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:20:41.646 | 99.99th=[ 133] 00:20:41.646 bw ( KiB/s): min= 832, max= 1104, per=4.29%, avg=960.00, stdev=75.94, samples=19 00:20:41.646 iops : min= 208, max= 276, avg=240.00, stdev=18.99, samples=19 00:20:41.646 lat (msec) : 4=0.24%, 10=1.94%, 20=0.53%, 50=22.63%, 100=71.80% 00:20:41.646 lat (msec) : 250=2.87% 00:20:41.646 cpu : usr=37.05%, sys=1.49%, ctx=1095, majf=0, minf=9 00:20:41.646 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:41.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.646 filename1: (groupid=0, jobs=1): err= 0: pid=83116: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=238, BW=954KiB/s (977kB/s)(9552KiB/10015msec) 00:20:41.646 slat (usec): min=4, max=8040, avg=40.04, stdev=371.47 00:20:41.646 clat (msec): min=19, max=144, avg=66.92, stdev=17.03 00:20:41.646 lat (msec): min=19, max=144, avg=66.96, stdev=17.04 00:20:41.646 clat percentiles (msec): 00:20:41.646 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 51], 00:20:41.646 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:20:41.646 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 93], 00:20:41.646 | 99.00th=[ 122], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 144], 00:20:41.646 | 99.99th=[ 144] 00:20:41.646 bw ( KiB/s): min= 784, max= 1096, per=4.24%, avg=950.85, stdev=90.27, samples=20 00:20:41.646 iops : min= 196, max= 274, avg=237.70, stdev=22.56, samples=20 00:20:41.646 lat (msec) : 20=0.29%, 50=19.30%, 100=77.26%, 250=3.14% 00:20:41.646 cpu : usr=42.51%, sys=1.94%, ctx=1337, majf=0, minf=9 00:20:41.646 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:41.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.646 filename2: (groupid=0, jobs=1): err= 0: pid=83117: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=223, BW=895KiB/s (916kB/s)(8984KiB/10043msec) 00:20:41.646 slat (usec): min=7, max=3743, avg=18.51, stdev=93.08 00:20:41.646 clat (msec): min=9, max=126, avg=71.39, stdev=19.50 00:20:41.646 lat (msec): min=9, max=126, avg=71.41, stdev=19.50 00:20:41.646 clat percentiles (msec): 00:20:41.646 | 1.00th=[ 15], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 56], 00:20:41.646 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:20:41.646 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 104], 00:20:41.646 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:20:41.646 | 99.99th=[ 127] 00:20:41.646 bw ( KiB/s): min= 656, max= 1424, per=3.98%, avg=892.00, stdev=156.50, samples=20 
00:20:41.646 iops : min= 164, max= 356, avg=223.00, stdev=39.12, samples=20 00:20:41.646 lat (msec) : 10=0.71%, 20=1.42%, 50=9.35%, 100=82.64%, 250=5.88% 00:20:41.646 cpu : usr=41.70%, sys=1.62%, ctx=1398, majf=0, minf=9 00:20:41.646 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=74.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:41.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.646 filename2: (groupid=0, jobs=1): err= 0: pid=83118: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=227, BW=912KiB/s (934kB/s)(9124KiB/10007msec) 00:20:41.646 slat (usec): min=4, max=12039, avg=34.03, stdev=384.56 00:20:41.646 clat (msec): min=7, max=130, avg=70.00, stdev=17.21 00:20:41.646 lat (msec): min=7, max=130, avg=70.04, stdev=17.21 00:20:41.646 clat percentiles (msec): 00:20:41.646 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:41.646 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:41.646 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 96], 00:20:41.646 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:20:41.646 | 99.99th=[ 131] 00:20:41.646 bw ( KiB/s): min= 761, max= 1008, per=4.01%, avg=898.16, stdev=78.82, samples=19 00:20:41.646 iops : min= 190, max= 252, avg=224.53, stdev=19.73, samples=19 00:20:41.646 lat (msec) : 10=0.39%, 20=0.31%, 50=17.01%, 100=77.69%, 250=4.60% 00:20:41.646 cpu : usr=35.54%, sys=1.65%, ctx=949, majf=0, minf=9 00:20:41.646 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:41.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=88.9%, 8=9.7%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.646 filename2: (groupid=0, jobs=1): err= 0: pid=83119: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=245, BW=983KiB/s (1007kB/s)(9836KiB/10002msec) 00:20:41.646 slat (usec): min=3, max=8063, avg=27.15, stdev=280.24 00:20:41.646 clat (msec): min=2, max=125, avg=64.96, stdev=20.54 00:20:41.646 lat (msec): min=2, max=125, avg=64.98, stdev=20.53 00:20:41.646 clat percentiles (msec): 00:20:41.646 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:20:41.646 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:41.646 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 96], 00:20:41.646 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:20:41.646 | 99.99th=[ 126] 00:20:41.646 bw ( KiB/s): min= 768, max= 1024, per=4.19%, avg=939.79, stdev=69.76, samples=19 00:20:41.646 iops : min= 192, max= 256, avg=234.95, stdev=17.44, samples=19 00:20:41.646 lat (msec) : 4=1.38%, 10=2.28%, 20=0.28%, 50=21.47%, 100=71.53% 00:20:41.646 lat (msec) : 250=3.05% 00:20:41.646 cpu : usr=34.06%, sys=1.59%, ctx=982, majf=0, minf=9 00:20:41.646 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:41.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 
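A note on reading the per-job fio summaries in this group: the "per=" field is each job's share of the group's aggregate read bandwidth, which is only printed later in the "Run status group 0 (all jobs)" line. Taking pid=83113 above as a worked example against the 21.9 MiB/s aggregate reported below:

  per ~= job avg bw / group aggregate bw
      ~= 959.70 KiB/s / (21.9 MiB/s * 1024 KiB per MiB)
      ~= 959.70 / 22425.6
      ~= 4.28%

which matches the per=4.28% printed for that job.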
00:20:41.646 filename2: (groupid=0, jobs=1): err= 0: pid=83120: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=200, BW=803KiB/s (823kB/s)(8048KiB/10017msec) 00:20:41.646 slat (usec): min=7, max=5031, avg=29.20, stdev=212.24 00:20:41.646 clat (msec): min=26, max=172, avg=79.42, stdev=19.69 00:20:41.646 lat (msec): min=26, max=172, avg=79.45, stdev=19.70 00:20:41.646 clat percentiles (msec): 00:20:41.646 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 67], 00:20:41.646 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 82], 00:20:41.646 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 103], 95.00th=[ 116], 00:20:41.646 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 174], 00:20:41.646 | 99.99th=[ 174] 00:20:41.646 bw ( KiB/s): min= 512, max= 1008, per=3.51%, avg=786.21, stdev=126.06, samples=19 00:20:41.646 iops : min= 128, max= 252, avg=196.53, stdev=31.49, samples=19 00:20:41.646 lat (msec) : 50=7.50%, 100=81.01%, 250=11.48% 00:20:41.646 cpu : usr=41.02%, sys=1.88%, ctx=1436, majf=0, minf=9 00:20:41.646 IO depths : 1=0.1%, 2=4.9%, 4=19.2%, 8=62.6%, 16=13.2%, 32=0.0%, >=64=0.0% 00:20:41.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 complete : 0=0.0%, 4=92.7%, 8=3.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.646 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.646 filename2: (groupid=0, jobs=1): err= 0: pid=83121: Tue Nov 26 19:28:38 2024 00:20:41.646 read: IOPS=232, BW=930KiB/s (953kB/s)(9336KiB/10034msec) 00:20:41.646 slat (usec): min=6, max=9049, avg=31.71, stdev=277.13 00:20:41.647 clat (msec): min=19, max=125, avg=68.55, stdev=16.32 00:20:41.647 lat (msec): min=19, max=125, avg=68.59, stdev=16.33 00:20:41.647 clat percentiles (msec): 00:20:41.647 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:20:41.647 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:20:41.647 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 88], 95.00th=[ 96], 00:20:41.647 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 123], 99.95th=[ 126], 00:20:41.647 | 99.99th=[ 126] 00:20:41.647 bw ( KiB/s): min= 768, max= 1080, per=4.15%, avg=929.65, stdev=98.91, samples=20 00:20:41.647 iops : min= 192, max= 270, avg=232.40, stdev=24.75, samples=20 00:20:41.647 lat (msec) : 20=0.09%, 50=16.45%, 100=80.21%, 250=3.26% 00:20:41.647 cpu : usr=39.22%, sys=1.58%, ctx=1197, majf=0, minf=0 00:20:41.647 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=76.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:41.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.647 filename2: (groupid=0, jobs=1): err= 0: pid=83122: Tue Nov 26 19:28:38 2024 00:20:41.647 read: IOPS=239, BW=957KiB/s (980kB/s)(9616KiB/10043msec) 00:20:41.647 slat (usec): min=7, max=8056, avg=42.35, stdev=346.11 00:20:41.647 clat (msec): min=15, max=125, avg=66.61, stdev=17.73 00:20:41.647 lat (msec): min=15, max=126, avg=66.65, stdev=17.74 00:20:41.647 clat percentiles (msec): 00:20:41.647 | 1.00th=[ 17], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 51], 00:20:41.647 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:20:41.647 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 94], 00:20:41.647 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 127], 
99.95th=[ 127], 00:20:41.647 | 99.99th=[ 127] 00:20:41.647 bw ( KiB/s): min= 768, max= 1400, per=4.26%, avg=955.20, stdev=126.53, samples=20 00:20:41.647 iops : min= 192, max= 350, avg=238.80, stdev=31.63, samples=20 00:20:41.647 lat (msec) : 20=1.33%, 50=18.05%, 100=77.79%, 250=2.83% 00:20:41.647 cpu : usr=43.05%, sys=1.73%, ctx=1429, majf=0, minf=9 00:20:41.647 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:41.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 issued rwts: total=2404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.647 filename2: (groupid=0, jobs=1): err= 0: pid=83123: Tue Nov 26 19:28:38 2024 00:20:41.647 read: IOPS=236, BW=944KiB/s (967kB/s)(9492KiB/10053msec) 00:20:41.647 slat (usec): min=3, max=8028, avg=25.02, stdev=284.84 00:20:41.647 clat (msec): min=4, max=156, avg=67.60, stdev=21.06 00:20:41.647 lat (msec): min=4, max=156, avg=67.63, stdev=21.07 00:20:41.647 clat percentiles (msec): 00:20:41.647 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 47], 20.00th=[ 53], 00:20:41.647 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:20:41.647 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 96], 00:20:41.647 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129], 00:20:41.647 | 99.99th=[ 157] 00:20:41.647 bw ( KiB/s): min= 776, max= 1904, per=4.21%, avg=942.80, stdev=235.11, samples=20 00:20:41.647 iops : min= 194, max= 476, avg=235.70, stdev=58.78, samples=20 00:20:41.647 lat (msec) : 10=2.61%, 20=2.70%, 50=11.80%, 100=79.56%, 250=3.33% 00:20:41.647 cpu : usr=35.64%, sys=1.28%, ctx=1050, majf=0, minf=9 00:20:41.647 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=80.8%, 16=16.9%, 32=0.0%, >=64=0.0% 00:20:41.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 complete : 0=0.0%, 4=88.4%, 8=11.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 issued rwts: total=2373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.647 filename2: (groupid=0, jobs=1): err= 0: pid=83124: Tue Nov 26 19:28:38 2024 00:20:41.647 read: IOPS=222, BW=891KiB/s (912kB/s)(8920KiB/10012msec) 00:20:41.647 slat (usec): min=4, max=8036, avg=30.64, stdev=339.40 00:20:41.647 clat (msec): min=20, max=130, avg=71.68, stdev=16.16 00:20:41.647 lat (msec): min=20, max=130, avg=71.71, stdev=16.16 00:20:41.647 clat percentiles (msec): 00:20:41.647 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:41.647 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:20:41.647 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 96], 00:20:41.647 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:20:41.647 | 99.99th=[ 131] 00:20:41.647 bw ( KiB/s): min= 768, max= 992, per=3.96%, avg=886.80, stdev=66.85, samples=20 00:20:41.647 iops : min= 192, max= 248, avg=221.70, stdev=16.71, samples=20 00:20:41.647 lat (msec) : 50=12.91%, 100=83.00%, 250=4.08% 00:20:41.647 cpu : usr=32.49%, sys=1.33%, ctx=895, majf=0, minf=9 00:20:41.647 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:41.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 complete : 0=0.0%, 4=89.4%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.647 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:41.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:41.647 00:20:41.647 Run status group 0 (all jobs): 00:20:41.647 READ: bw=21.9MiB/s (22.9MB/s), 803KiB/s-1041KiB/s (823kB/s-1066kB/s), io=220MiB (231MB), run=10002-10061msec 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:41.647 19:28:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 bdev_null0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:41.647 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.648 [2024-11-26 19:28:38.412260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:41.648 
19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.648 bdev_null1 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.648 { 00:20:41.648 "params": { 00:20:41.648 "name": "Nvme$subsystem", 00:20:41.648 "trtype": "$TEST_TRANSPORT", 00:20:41.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.648 "adrfam": "ipv4", 00:20:41.648 "trsvcid": "$NVMF_PORT", 00:20:41.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.648 "hdgst": ${hdgst:-false}, 00:20:41.648 "ddgst": ${ddgst:-false} 00:20:41.648 }, 00:20:41.648 "method": "bdev_nvme_attach_controller" 00:20:41.648 } 00:20:41.648 EOF 00:20:41.648 )") 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.648 
19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.648 { 00:20:41.648 "params": { 00:20:41.648 "name": "Nvme$subsystem", 00:20:41.648 "trtype": "$TEST_TRANSPORT", 00:20:41.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.648 "adrfam": "ipv4", 00:20:41.648 "trsvcid": "$NVMF_PORT", 00:20:41.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.648 "hdgst": ${hdgst:-false}, 00:20:41.648 "ddgst": ${ddgst:-false} 00:20:41.648 }, 00:20:41.648 "method": "bdev_nvme_attach_controller" 00:20:41.648 } 00:20:41.648 EOF 00:20:41.648 )") 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:41.648 "params": { 00:20:41.648 "name": "Nvme0", 00:20:41.648 "trtype": "tcp", 00:20:41.648 "traddr": "10.0.0.3", 00:20:41.648 "adrfam": "ipv4", 00:20:41.648 "trsvcid": "4420", 00:20:41.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:41.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:41.648 "hdgst": false, 00:20:41.648 "ddgst": false 00:20:41.648 }, 00:20:41.648 "method": "bdev_nvme_attach_controller" 00:20:41.648 },{ 00:20:41.648 "params": { 00:20:41.648 "name": "Nvme1", 00:20:41.648 "trtype": "tcp", 00:20:41.648 "traddr": "10.0.0.3", 00:20:41.648 "adrfam": "ipv4", 00:20:41.648 "trsvcid": "4420", 00:20:41.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.648 "hdgst": false, 00:20:41.648 "ddgst": false 00:20:41.648 }, 00:20:41.648 "method": "bdev_nvme_attach_controller" 00:20:41.648 }' 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.648 19:28:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.648 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:41.648 ... 00:20:41.648 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:41.648 ... 
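The fio job description is passed on /dev/fd/61 by gen_fio_conf and is never echoed into this log; only the filename0/filename1 summary lines above are captured. A minimal sketch of what that job file plausibly contains, reconstructed from the dif.sh@115 settings shown earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the summaries above, is given here. The exact option spelling and the bdev names Nvme0n1/Nvme1n1 are assumptions, not captured output:

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

Two filename sections at numjobs=2 account for the "Starting 4 threads" line below.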
00:20:41.648 fio-3.35 00:20:41.648 Starting 4 threads 00:20:46.909 00:20:46.909 filename0: (groupid=0, jobs=1): err= 0: pid=83256: Tue Nov 26 19:28:44 2024 00:20:46.909 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5002msec) 00:20:46.909 slat (nsec): min=6721, max=51926, avg=12101.58, stdev=4110.16 00:20:46.909 clat (usec): min=538, max=7024, avg=3754.64, stdev=815.86 00:20:46.909 lat (usec): min=549, max=7038, avg=3766.75, stdev=816.60 00:20:46.909 clat percentiles (usec): 00:20:46.909 | 1.00th=[ 1369], 5.00th=[ 1614], 10.00th=[ 2966], 20.00th=[ 3294], 00:20:46.909 | 30.00th=[ 3326], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 4146], 00:20:46.909 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 5080], 00:20:46.909 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 5997], 00:20:46.909 | 99.99th=[ 6980] 00:20:46.909 bw ( KiB/s): min=14848, max=18976, per=25.14%, avg=16624.00, stdev=1538.93, samples=9 00:20:46.909 iops : min= 1856, max= 2372, avg=2078.00, stdev=192.37, samples=9 00:20:46.909 lat (usec) : 750=0.03% 00:20:46.909 lat (msec) : 2=6.39%, 4=47.06%, 10=46.52% 00:20:46.909 cpu : usr=91.92%, sys=7.26%, ctx=7, majf=0, minf=0 00:20:46.909 IO depths : 1=0.1%, 2=11.5%, 4=60.4%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.909 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.909 issued rwts: total=10545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.909 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:46.909 filename0: (groupid=0, jobs=1): err= 0: pid=83257: Tue Nov 26 19:28:44 2024 00:20:46.909 read: IOPS=1969, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5002msec) 00:20:46.909 slat (nsec): min=3782, max=79805, avg=15294.94, stdev=3693.98 00:20:46.909 clat (usec): min=761, max=7645, avg=4008.65, stdev=586.98 00:20:46.909 lat (usec): min=770, max=7662, avg=4023.95, stdev=586.95 00:20:46.909 clat percentiles (usec): 00:20:46.909 | 1.00th=[ 2343], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3326], 00:20:46.909 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 4146], 60.00th=[ 4178], 00:20:46.909 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4752], 95.00th=[ 5080], 00:20:46.909 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 6128], 99.95th=[ 6128], 00:20:46.909 | 99.99th=[ 7635] 00:20:46.909 bw ( KiB/s): min=14848, max=16912, per=23.90%, avg=15802.56, stdev=730.94, samples=9 00:20:46.909 iops : min= 1856, max= 2114, avg=1975.22, stdev=91.33, samples=9 00:20:46.909 lat (usec) : 1000=0.12% 00:20:46.909 lat (msec) : 2=0.58%, 4=41.37%, 10=57.93% 00:20:46.910 cpu : usr=92.72%, sys=6.48%, ctx=5, majf=0, minf=1 00:20:46.910 IO depths : 1=0.1%, 2=15.8%, 4=57.8%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.910 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.910 issued rwts: total=9853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:46.910 filename1: (groupid=0, jobs=1): err= 0: pid=83258: Tue Nov 26 19:28:44 2024 00:20:46.910 read: IOPS=1981, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5002msec) 00:20:46.910 slat (nsec): min=4810, max=55327, avg=15513.28, stdev=4123.33 00:20:46.910 clat (usec): min=981, max=7241, avg=3985.13, stdev=600.40 00:20:46.910 lat (usec): min=989, max=7262, avg=4000.64, stdev=600.72 00:20:46.910 clat percentiles (usec): 00:20:46.910 | 1.00th=[ 2040], 5.00th=[ 3261], 
10.00th=[ 3294], 20.00th=[ 3326], 00:20:46.910 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 4146], 60.00th=[ 4178], 00:20:46.910 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4752], 95.00th=[ 5080], 00:20:46.910 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 6063], 99.95th=[ 6259], 00:20:46.910 | 99.99th=[ 7242] 00:20:46.910 bw ( KiB/s): min=14848, max=16912, per=24.06%, avg=15905.78, stdev=776.18, samples=9 00:20:46.910 iops : min= 1856, max= 2114, avg=1988.22, stdev=97.02, samples=9 00:20:46.910 lat (usec) : 1000=0.02% 00:20:46.910 lat (msec) : 2=0.86%, 4=42.12%, 10=57.00% 00:20:46.910 cpu : usr=92.28%, sys=6.92%, ctx=9, majf=0, minf=0 00:20:46.910 IO depths : 1=0.1%, 2=15.3%, 4=58.1%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.910 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.910 issued rwts: total=9909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:46.910 filename1: (groupid=0, jobs=1): err= 0: pid=83259: Tue Nov 26 19:28:44 2024 00:20:46.910 read: IOPS=2205, BW=17.2MiB/s (18.1MB/s)(86.2MiB/5002msec) 00:20:46.910 slat (nsec): min=4810, max=55368, avg=13502.35, stdev=4479.06 00:20:46.910 clat (usec): min=762, max=6750, avg=3585.90, stdev=964.73 00:20:46.910 lat (usec): min=772, max=6764, avg=3599.40, stdev=965.58 00:20:46.910 clat percentiles (usec): 00:20:46.910 | 1.00th=[ 1369], 5.00th=[ 1401], 10.00th=[ 1434], 20.00th=[ 3261], 00:20:46.910 | 30.00th=[ 3294], 40.00th=[ 3359], 50.00th=[ 3818], 60.00th=[ 3916], 00:20:46.910 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4752], 95.00th=[ 5080], 00:20:46.910 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5997], 99.95th=[ 6259], 00:20:46.910 | 99.99th=[ 6587] 00:20:46.910 bw ( KiB/s): min=15808, max=20592, per=27.08%, avg=17907.56, stdev=2047.22, samples=9 00:20:46.910 iops : min= 1978, max= 2574, avg=2238.44, stdev=255.90, samples=9 00:20:46.910 lat (usec) : 1000=0.07% 00:20:46.910 lat (msec) : 2=11.28%, 4=51.96%, 10=36.68% 00:20:46.910 cpu : usr=92.06%, sys=7.06%, ctx=9, majf=0, minf=0 00:20:46.910 IO depths : 1=0.1%, 2=6.9%, 4=62.3%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.910 complete : 0=0.0%, 4=97.4%, 8=2.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.910 issued rwts: total=11033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:46.910 00:20:46.910 Run status group 0 (all jobs): 00:20:46.910 READ: bw=64.6MiB/s (67.7MB/s), 15.4MiB/s-17.2MiB/s (16.1MB/s-18.1MB/s), io=323MiB (339MB), run=5002-5002msec 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 
19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 ************************************ 00:20:46.910 END TEST fio_dif_rand_params 00:20:46.910 ************************************ 00:20:46.910 00:20:46.910 real 0m23.754s 00:20:46.910 user 2m4.696s 00:20:46.910 sys 0m7.302s 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 19:28:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:46.910 19:28:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:46.910 19:28:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 ************************************ 00:20:46.910 START TEST fio_dif_digest 00:20:46.910 ************************************ 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:46.910 19:28:44 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 bdev_null0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.910 [2024-11-26 19:28:44.689207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.910 { 00:20:46.910 "params": { 00:20:46.910 "name": "Nvme$subsystem", 00:20:46.910 "trtype": "$TEST_TRANSPORT", 00:20:46.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.910 "adrfam": "ipv4", 00:20:46.910 "trsvcid": "$NVMF_PORT", 00:20:46.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.910 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:46.910 "hdgst": ${hdgst:-false}, 00:20:46.910 "ddgst": ${ddgst:-false} 00:20:46.910 }, 00:20:46.910 "method": "bdev_nvme_attach_controller" 00:20:46.910 } 00:20:46.910 EOF 00:20:46.910 )") 00:20:46.910 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:46.911 "params": { 00:20:46.911 "name": "Nvme0", 00:20:46.911 "trtype": "tcp", 00:20:46.911 "traddr": "10.0.0.3", 00:20:46.911 "adrfam": "ipv4", 00:20:46.911 "trsvcid": "4420", 00:20:46.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:46.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:46.911 "hdgst": true, 00:20:46.911 "ddgst": true 00:20:46.911 }, 00:20:46.911 "method": "bdev_nvme_attach_controller" 00:20:46.911 }' 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:46.911 19:28:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.911 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:46.911 ... 
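The three threads started below come from numjobs=3 against a single filename section (one null bdev created above with 16-byte metadata and DIF type 3). With bs=128k at iodepth=3, the per-thread bandwidth reported further down is consistent with the IOPS figure:

  BW ~= IOPS * block size
     ~= 223 * 128 KiB/s
     ~= 28544 KiB/s ~= 27.9 MiB/s

and the ~83.8 MiB/s group aggregate in the final run-status line is simply the three threads added together.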
00:20:46.911 fio-3.35 00:20:46.911 Starting 3 threads 00:20:59.174 00:20:59.174 filename0: (groupid=0, jobs=1): err= 0: pid=83365: Tue Nov 26 19:28:55 2024 00:20:59.174 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10001msec) 00:20:59.174 slat (usec): min=8, max=111, avg=26.88, stdev=14.22 00:20:59.174 clat (usec): min=12984, max=15896, avg=13355.39, stdev=346.82 00:20:59.174 lat (usec): min=13005, max=15933, avg=13382.27, stdev=349.51 00:20:59.174 clat percentiles (usec): 00:20:59.174 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:20:59.174 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:20:59.174 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:20:59.174 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15795], 99.95th=[15926], 00:20:59.174 | 99.99th=[15926] 00:20:59.175 bw ( KiB/s): min=27648, max=29184, per=33.36%, avg=28621.00, stdev=558.18, samples=19 00:20:59.175 iops : min= 216, max= 228, avg=223.58, stdev= 4.40, samples=19 00:20:59.175 lat (msec) : 20=100.00% 00:20:59.175 cpu : usr=92.08%, sys=7.12%, ctx=23, majf=0, minf=0 00:20:59.175 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.175 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.175 filename0: (groupid=0, jobs=1): err= 0: pid=83366: Tue Nov 26 19:28:55 2024 00:20:59.175 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10001msec) 00:20:59.175 slat (usec): min=8, max=128, avg=27.90, stdev=12.53 00:20:59.175 clat (usec): min=12913, max=15858, avg=13358.38, stdev=349.06 00:20:59.175 lat (usec): min=12927, max=15895, avg=13386.27, stdev=352.75 00:20:59.175 clat percentiles (usec): 00:20:59.175 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:20:59.175 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:20:59.175 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:20:59.175 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:20:59.175 | 99.99th=[15795] 00:20:59.175 bw ( KiB/s): min=27648, max=29184, per=33.36%, avg=28618.11, stdev=563.32, samples=19 00:20:59.175 iops : min= 216, max= 228, avg=223.58, stdev= 4.40, samples=19 00:20:59.175 lat (msec) : 20=100.00% 00:20:59.175 cpu : usr=92.09%, sys=6.98%, ctx=80, majf=0, minf=0 00:20:59.175 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.175 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.175 filename0: (groupid=0, jobs=1): err= 0: pid=83367: Tue Nov 26 19:28:55 2024 00:20:59.175 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(280MiB/10008msec) 00:20:59.175 slat (nsec): min=5403, max=77765, avg=27353.85, stdev=12827.36 00:20:59.175 clat (usec): min=10960, max=15854, avg=13349.69, stdev=362.99 00:20:59.175 lat (usec): min=10965, max=15880, avg=13377.05, stdev=367.05 00:20:59.175 clat percentiles (usec): 00:20:59.175 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:20:59.175 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13304], 
60.00th=[13304], 00:20:59.175 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:20:59.175 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:20:59.175 | 99.99th=[15795] 00:20:59.175 bw ( KiB/s): min=27648, max=29184, per=33.36%, avg=28618.11, stdev=563.32, samples=19 00:20:59.175 iops : min= 216, max= 228, avg=223.58, stdev= 4.40, samples=19 00:20:59.175 lat (msec) : 20=100.00% 00:20:59.175 cpu : usr=92.53%, sys=6.86%, ctx=105, majf=0, minf=0 00:20:59.175 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.175 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.175 00:20:59.175 Run status group 0 (all jobs): 00:20:59.175 READ: bw=83.8MiB/s (87.9MB/s), 27.9MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=839MiB (879MB), run=10001-10008msec 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.175 00:20:59.175 real 0m11.083s 00:20:59.175 user 0m28.394s 00:20:59.175 sys 0m2.390s 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.175 ************************************ 00:20:59.175 END TEST fio_dif_digest 00:20:59.175 ************************************ 00:20:59.175 19:28:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:59.175 19:28:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:59.175 19:28:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.175 rmmod nvme_tcp 00:20:59.175 rmmod nvme_fabrics 00:20:59.175 rmmod nvme_keyring 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.175 19:28:55 nvmf_dif -- 
nvmf/common.sh@128 -- # set -e 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82622 ']' 00:20:59.175 19:28:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82622 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82622 ']' 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82622 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82622 00:20:59.175 killing process with pid 82622 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82622' 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82622 00:20:59.175 19:28:55 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82622 00:20:59.175 19:28:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:59.175 19:28:56 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:59.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:59.175 Waiting for block devices as requested 00:20:59.175 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:59.176 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.176 19:28:56 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.176 19:28:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:59.176 19:28:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.176 19:28:56 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:59.176 00:20:59.176 real 0m59.959s 00:20:59.176 user 3m49.871s 00:20:59.176 sys 0m18.208s 00:20:59.176 19:28:56 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.176 19:28:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:59.176 ************************************ 00:20:59.176 END TEST nvmf_dif 00:20:59.176 ************************************ 00:20:59.176 19:28:56 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:59.176 19:28:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:59.176 19:28:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.176 19:28:56 -- common/autotest_common.sh@10 -- # set +x 00:20:59.176 ************************************ 00:20:59.176 START TEST nvmf_abort_qd_sizes 00:20:59.176 ************************************ 00:20:59.176 19:28:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:59.176 * Looking for test storage... 00:20:59.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.176 --rc genhtml_branch_coverage=1 00:20:59.176 --rc genhtml_function_coverage=1 00:20:59.176 --rc genhtml_legend=1 00:20:59.176 --rc geninfo_all_blocks=1 00:20:59.176 --rc geninfo_unexecuted_blocks=1 00:20:59.176 00:20:59.176 ' 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.176 --rc genhtml_branch_coverage=1 00:20:59.176 --rc genhtml_function_coverage=1 00:20:59.176 --rc genhtml_legend=1 00:20:59.176 --rc geninfo_all_blocks=1 00:20:59.176 --rc geninfo_unexecuted_blocks=1 00:20:59.176 00:20:59.176 ' 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.176 --rc genhtml_branch_coverage=1 00:20:59.176 --rc genhtml_function_coverage=1 00:20:59.176 --rc genhtml_legend=1 00:20:59.176 --rc geninfo_all_blocks=1 00:20:59.176 --rc geninfo_unexecuted_blocks=1 00:20:59.176 00:20:59.176 ' 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.176 --rc genhtml_branch_coverage=1 00:20:59.176 --rc genhtml_function_coverage=1 00:20:59.176 --rc genhtml_legend=1 00:20:59.176 --rc geninfo_all_blocks=1 00:20:59.176 --rc geninfo_unexecuted_blocks=1 00:20:59.176 00:20:59.176 ' 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.176 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.177 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:59.177 Cannot find device "nvmf_init_br" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:59.177 Cannot find device "nvmf_init_br2" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:59.177 Cannot find device "nvmf_tgt_br" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:59.177 Cannot find device "nvmf_tgt_br2" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:59.177 Cannot find device "nvmf_init_br" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:59.177 Cannot find device "nvmf_init_br2" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:59.177 Cannot find device "nvmf_tgt_br" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:59.177 Cannot find device "nvmf_tgt_br2" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:59.177 Cannot find device "nvmf_br" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:59.177 Cannot find device "nvmf_init_if" 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:59.177 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:59.177 Cannot find device "nvmf_init_if2" 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
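The "Cannot find device ..." lines above are nvmf_veth_init probing for leftovers from a previous run before it rebuilds the test network: a namespace for the target process, veth pairs for each side, and a bridge (nvmf_br) joining them. A minimal sketch of that topology, showing only the first initiator/target pair (the second pair with 10.0.0.2/10.0.0.4 is analogous); the names and addresses are taken from the trace that follows, and the commands assume root:

# target lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
# one bridge stitches the host-side ends together
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP in; the SPDK_NVMF comment is what lets the iptr teardown seen earlier
# (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip only the harness's own rules
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'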
00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:59.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:59.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:59.178 00:20:59.178 --- 10.0.0.3 ping statistics --- 00:20:59.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.178 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:59.178 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:59.178 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:20:59.178 00:20:59.178 --- 10.0.0.4 ping statistics --- 00:20:59.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.178 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:59.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:59.178 00:20:59.178 --- 10.0.0.1 ping statistics --- 00:20:59.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.178 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:59.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:59.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:20:59.178 00:20:59.178 --- 10.0.0.2 ping statistics --- 00:20:59.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.178 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:59.178 19:28:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:59.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:00.014 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:00.014 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84011 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84011 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84011 ']' 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.014 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:00.014 [2024-11-26 19:28:58.442466] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
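nvmfappstart has just launched the target application inside the target namespace, so it owns the 10.0.0.3/10.0.0.4 addresses rather than the host's, and waitforlisten blocks until the process answers on /var/tmp/spdk.sock before any RPCs are issued. A rough sketch of that start-and-wait pattern (the polling loop is illustrative, not the actual waitforlisten implementation in autotest_common.sh):

# start the target in the namespace, as traced above
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# wait until the RPC socket responds (rpc_get_methods is a cheap query)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done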
00:21:00.014 [2024-11-26 19:28:58.442566] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.273 [2024-11-26 19:28:58.596834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.273 [2024-11-26 19:28:58.669832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.273 [2024-11-26 19:28:58.669928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.273 [2024-11-26 19:28:58.669944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.273 [2024-11-26 19:28:58.669955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.273 [2024-11-26 19:28:58.669964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.273 [2024-11-26 19:28:58.671291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.273 [2024-11-26 19:28:58.671439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.273 [2024-11-26 19:28:58.671600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.273 [2024-11-26 19:28:58.671609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.531 [2024-11-26 19:28:58.730038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:00.531 19:28:58 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
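The scan above locates NVMe controllers by PCI class code rather than by /dev/nvme* names, so devices already unbound from the kernel nvme driver are still found; on this VM it turns up the two emulated controllers at 0000:00:10.0 and 0000:00:11.0. The core of the pipeline, condensed from the trace (class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVM Express):

# print the PCI addresses of all NVMe controllers (class code 0108, prog-if 02)
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# the harness then runs each address through an allow/block check (pci_can_use)
# and hands the first survivor, 0000:00:10.0, to the spdk_target_abort test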
00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.531 19:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:00.531 ************************************ 00:21:00.531 START TEST spdk_target_abort 00:21:00.531 ************************************ 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:00.531 spdk_targetn1 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:00.531 [2024-11-26 19:28:58.953214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.531 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:00.791 [2024-11-26 19:28:58.990107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:00.791 19:28:58 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:00.791 19:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:04.075 Initializing NVMe Controllers 00:21:04.075 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:04.075 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:04.075 Initialization complete. Launching workers. 
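rabort has assembled the connection string for the TCP subsystem created above and runs the abort example three times, at queue depths 4, 24 and 64. Each run drives 4 KiB mixed read/write I/O (-w rw -M 50) against the namespace while submitting abort commands for outstanding I/O; the NS:/CTRLR: summary lines that follow give I/O completed versus aborts submitted, with "success"/"unsuccessful" counting, roughly, aborts that did or did not catch their target command before it completed. The first invocation, exactly as traced:

# queue depth 4; the loop repeats with -q 24 and -q 64
/home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'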
00:21:04.075 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11068, failed: 0 00:21:04.075 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1018, failed to submit 10050 00:21:04.075 success 861, unsuccessful 157, failed 0 00:21:04.075 19:29:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:04.075 19:29:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:07.365 Initializing NVMe Controllers 00:21:07.365 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:07.365 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:07.365 Initialization complete. Launching workers. 00:21:07.366 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9019, failed: 0 00:21:07.366 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1167, failed to submit 7852 00:21:07.366 success 410, unsuccessful 757, failed 0 00:21:07.366 19:29:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:07.366 19:29:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:10.647 Initializing NVMe Controllers 00:21:10.647 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:10.647 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:10.647 Initialization complete. Launching workers. 
00:21:10.647 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31555, failed: 0 00:21:10.647 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2169, failed to submit 29386 00:21:10.647 success 453, unsuccessful 1716, failed 0 00:21:10.647 19:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:10.647 19:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.647 19:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:10.647 19:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.647 19:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:10.647 19:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.647 19:29:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84011 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84011 ']' 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84011 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84011 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.211 killing process with pid 84011 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84011' 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84011 00:21:11.211 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84011 00:21:11.471 00:21:11.471 real 0m10.790s 00:21:11.471 user 0m41.379s 00:21:11.471 sys 0m2.210s 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:11.471 ************************************ 00:21:11.471 END TEST spdk_target_abort 00:21:11.471 ************************************ 00:21:11.471 19:29:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:11.471 19:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.471 19:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.471 19:29:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:11.471 ************************************ 00:21:11.471 START TEST kernel_target_abort 00:21:11.471 
************************************ 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:11.471 19:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:11.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.729 Waiting for block devices as requested 00:21:11.729 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.987 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:11.987 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:11.988 No valid GPT data, bailing 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:11.988 No valid GPT data, bailing 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:11.988 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:12.246 No valid GPT data, bailing 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:12.246 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:12.247 No valid GPT data, bailing 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 --hostid=560f6fb4-1392-4f8a-a310-a32d17cc4390 -a 10.0.0.1 -t tcp -s 4420 00:21:12.247 00:21:12.247 Discovery Log Number of Records 2, Generation counter 2 00:21:12.247 =====Discovery Log Entry 0====== 00:21:12.247 trtype: tcp 00:21:12.247 adrfam: ipv4 00:21:12.247 subtype: current discovery subsystem 00:21:12.247 treq: not specified, sq flow control disable supported 00:21:12.247 portid: 1 00:21:12.247 trsvcid: 4420 00:21:12.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:12.247 traddr: 10.0.0.1 00:21:12.247 eflags: none 00:21:12.247 sectype: none 00:21:12.247 =====Discovery Log Entry 1====== 00:21:12.247 trtype: tcp 00:21:12.247 adrfam: ipv4 00:21:12.247 subtype: nvme subsystem 00:21:12.247 treq: not specified, sq flow control disable supported 00:21:12.247 portid: 1 00:21:12.247 trsvcid: 4420 00:21:12.247 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:12.247 traddr: 10.0.0.1 00:21:12.247 eflags: none 00:21:12.247 sectype: none 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:12.247 19:29:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:12.247 19:29:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:15.529 Initializing NVMe Controllers 00:21:15.529 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:15.529 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:15.529 Initialization complete. Launching workers. 00:21:15.529 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33046, failed: 0 00:21:15.529 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33046, failed to submit 0 00:21:15.529 success 0, unsuccessful 33046, failed 0 00:21:15.529 19:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:15.529 19:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:18.814 Initializing NVMe Controllers 00:21:18.814 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:18.814 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:18.814 Initialization complete. Launching workers. 
00:21:18.814 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70818, failed: 0 00:21:18.814 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30860, failed to submit 39958 00:21:18.814 success 0, unsuccessful 30860, failed 0 00:21:18.814 19:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:18.814 19:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.127 Initializing NVMe Controllers 00:21:22.127 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:22.127 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:22.127 Initialization complete. Launching workers. 00:21:22.127 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82596, failed: 0 00:21:22.127 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20594, failed to submit 62002 00:21:22.127 success 0, unsuccessful 20594, failed 0 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:22.127 19:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:22.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:24.595 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:24.595 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:24.595 00:21:24.595 real 0m13.001s 00:21:24.595 user 0m6.406s 00:21:24.595 sys 0m4.042s 00:21:24.595 ************************************ 00:21:24.595 END TEST kernel_target_abort 00:21:24.595 ************************************ 00:21:24.595 19:29:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.595 19:29:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:24.595 
19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.595 rmmod nvme_tcp 00:21:24.595 rmmod nvme_fabrics 00:21:24.595 rmmod nvme_keyring 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84011 ']' 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84011 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84011 ']' 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84011 00:21:24.595 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84011) - No such process 00:21:24.595 Process with pid 84011 is not found 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84011 is not found' 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:24.595 19:29:22 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:24.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:24.853 Waiting for block devices as requested 00:21:24.853 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:25.111 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:25.111 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:25.369 19:29:23 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:25.369 ************************************ 00:21:25.369 END TEST nvmf_abort_qd_sizes 00:21:25.369 ************************************ 00:21:25.369 00:21:25.369 real 0m26.737s 00:21:25.369 user 0m48.938s 00:21:25.369 sys 0m7.655s 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.369 19:29:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:25.369 19:29:23 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:25.369 19:29:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.369 19:29:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.369 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:21:25.369 ************************************ 00:21:25.369 START TEST keyring_file 00:21:25.369 ************************************ 00:21:25.369 19:29:23 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:25.628 * Looking for test storage... 
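On the way out, clean_kernel_target undoes that export in reverse order, and nvmftestfini then removes the veth/bridge fixture together with the nvmf_tgt_ns_spdk network namespace. The commands from the trace condense into the following sketch (interface and NQN names are the ones shown there; the final ip netns delete is inferred from remove_spdk_ns and is an assumption):

    # kernel target teardown, mirroring the configfs setup
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # the bare 'echo 0' in the trace
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet

    # veth/bridge teardown (nvmf_veth_fini)
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: what remove_spdk_ns boils down to

The iptables-save | grep -v SPDK_NVMF | iptables-restore step just before this strips only the firewall rules the test added, leaving the rest of the host configuration alone.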
00:21:25.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.628 19:29:23 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.628 --rc genhtml_branch_coverage=1 00:21:25.628 --rc genhtml_function_coverage=1 00:21:25.628 --rc genhtml_legend=1 00:21:25.628 --rc geninfo_all_blocks=1 00:21:25.628 --rc geninfo_unexecuted_blocks=1 00:21:25.628 00:21:25.628 ' 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.628 --rc genhtml_branch_coverage=1 00:21:25.628 --rc genhtml_function_coverage=1 00:21:25.628 --rc genhtml_legend=1 00:21:25.628 --rc geninfo_all_blocks=1 00:21:25.628 --rc 
geninfo_unexecuted_blocks=1 00:21:25.628 00:21:25.628 ' 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.628 --rc genhtml_branch_coverage=1 00:21:25.628 --rc genhtml_function_coverage=1 00:21:25.628 --rc genhtml_legend=1 00:21:25.628 --rc geninfo_all_blocks=1 00:21:25.628 --rc geninfo_unexecuted_blocks=1 00:21:25.628 00:21:25.628 ' 00:21:25.628 19:29:23 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.629 --rc genhtml_branch_coverage=1 00:21:25.629 --rc genhtml_function_coverage=1 00:21:25.629 --rc genhtml_legend=1 00:21:25.629 --rc geninfo_all_blocks=1 00:21:25.629 --rc geninfo_unexecuted_blocks=1 00:21:25.629 00:21:25.629 ' 00:21:25.629 19:29:23 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.629 19:29:23 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.629 19:29:23 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.629 19:29:23 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.629 19:29:23 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.629 19:29:23 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.629 19:29:23 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.629 19:29:23 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.629 19:29:23 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:25.629 19:29:23 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.629 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:25.629 19:29:23 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:25.629 19:29:23 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:25.629 19:29:23 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:25.629 19:29:23 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:25.629 19:29:23 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:25.629 19:29:23 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:25.629 19:29:23 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ojS2i0wWNB 00:21:25.629 19:29:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:25.629 19:29:23 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ojS2i0wWNB 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ojS2i0wWNB 00:21:25.629 19:29:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ojS2i0wWNB 00:21:25.629 19:29:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.p9n3eZqwNc 00:21:25.629 19:29:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:25.629 19:29:24 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:25.629 19:29:24 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:25.629 19:29:24 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:25.629 19:29:24 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:25.629 19:29:24 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:25.629 19:29:24 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:25.888 19:29:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.p9n3eZqwNc 00:21:25.888 19:29:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.p9n3eZqwNc 00:21:25.888 19:29:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.p9n3eZqwNc 00:21:25.888 19:29:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=84920 00:21:25.888 19:29:24 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.888 19:29:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84920 00:21:25.888 19:29:24 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84920 ']' 00:21:25.888 19:29:24 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.888 19:29:24 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.888 19:29:24 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:25.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.888 19:29:24 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.888 19:29:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:25.888 [2024-11-26 19:29:24.158135] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:21:25.888 [2024-11-26 19:29:24.158251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84920 ] 00:21:25.888 [2024-11-26 19:29:24.314920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.147 [2024-11-26 19:29:24.384048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.147 [2024-11-26 19:29:24.463107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:27.083 19:29:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:27.083 [2024-11-26 19:29:25.181143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.083 null0 00:21:27.083 [2024-11-26 19:29:25.213113] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:27.083 [2024-11-26 19:29:25.213321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.083 19:29:25 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:27.083 [2024-11-26 19:29:25.241112] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:27.083 request: 00:21:27.083 { 00:21:27.083 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:27.083 "secure_channel": false, 00:21:27.083 "listen_address": { 00:21:27.083 "trtype": "tcp", 00:21:27.083 "traddr": "127.0.0.1", 00:21:27.083 "trsvcid": "4420" 00:21:27.083 }, 00:21:27.083 "method": "nvmf_subsystem_add_listener", 00:21:27.083 "req_id": 1 00:21:27.083 } 
00:21:27.083 Got JSON-RPC error response 00:21:27.083 response: 00:21:27.083 { 00:21:27.083 "code": -32602, 00:21:27.083 "message": "Invalid parameters" 00:21:27.083 } 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:27.083 19:29:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.084 19:29:25 keyring_file -- keyring/file.sh@47 -- # bperfpid=84937 00:21:27.084 19:29:25 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:27.084 19:29:25 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84937 /var/tmp/bperf.sock 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84937 ']' 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:27.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.084 19:29:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:27.084 [2024-11-26 19:29:25.306870] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:21:27.084 [2024-11-26 19:29:25.307014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84937 ] 00:21:27.084 [2024-11-26 19:29:25.455042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.084 [2024-11-26 19:29:25.515349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.342 [2024-11-26 19:29:25.568477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.908 19:29:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.908 19:29:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:27.908 19:29:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:27.908 19:29:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:28.167 19:29:26 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.p9n3eZqwNc 00:21:28.167 19:29:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.p9n3eZqwNc 00:21:28.425 19:29:26 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:28.425 19:29:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:28.425 19:29:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:28.425 19:29:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.425 19:29:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.684 19:29:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ojS2i0wWNB == \/\t\m\p\/\t\m\p\.\o\j\S\2\i\0\w\W\N\B ]] 00:21:28.684 19:29:27 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:28.684 19:29:27 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:28.684 19:29:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.684 19:29:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.684 19:29:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:29.253 19:29:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.p9n3eZqwNc == \/\t\m\p\/\t\m\p\.\p\9\n\3\e\Z\q\w\N\c ]] 00:21:29.253 19:29:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:29.253 19:29:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:29.253 19:29:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:29.253 19:29:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:29.253 19:29:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.253 19:29:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:29.511 19:29:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:29.511 19:29:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:29.511 19:29:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:29.511 19:29:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:29.511 19:29:27 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:29.511 19:29:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:29.511 19:29:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.511 19:29:27 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:29.511 19:29:27 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:29.511 19:29:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:29.770 [2024-11-26 19:29:28.160852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.027 nvme0n1 00:21:30.028 19:29:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:30.028 19:29:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:30.028 19:29:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:30.028 19:29:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.028 19:29:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:30.028 19:29:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.295 19:29:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:30.295 19:29:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:30.295 19:29:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:30.295 19:29:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:30.295 19:29:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:30.295 19:29:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.295 19:29:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.577 19:29:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:30.577 19:29:28 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:30.577 Running I/O for 1 seconds... 
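Everything in the keyring_file test is driven over bdevperf's private RPC socket: the two PSK files are registered as key0/key1, a controller is attached with --psk key0 against the spdk_tgt listener brought up earlier, and bdevperf.py triggers the actual I/O. A condensed sketch of that flow, assuming the target side is already serving nqn.2016-06.io.spdk:cnode0 on 127.0.0.1:4420 with the same key (the rpc_get_methods poll is only a stand-in for waitforlisten):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r "$sock" -z &

    # wait until the RPC server inside bdevperf answers
    until "$rpc" -s "$sock" rpc_get_methods > /dev/null 2>&1; do sleep 0.2; done

    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB
    "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.p9n3eZqwNc
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

Only key0 is handed to the controller, which is why its refcount climbs to 2 in the checks that follow while key1 stays at 1.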
00:21:31.510 11727.00 IOPS, 45.81 MiB/s 00:21:31.510 Latency(us) 00:21:31.510 [2024-11-26T19:29:29.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.510 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:31.510 nvme0n1 : 1.01 11769.23 45.97 0.00 0.00 10844.53 6911.07 19184.17 00:21:31.510 [2024-11-26T19:29:29.950Z] =================================================================================================================== 00:21:31.510 [2024-11-26T19:29:29.950Z] Total : 11769.23 45.97 0.00 0.00 10844.53 6911.07 19184.17 00:21:31.510 { 00:21:31.510 "results": [ 00:21:31.510 { 00:21:31.510 "job": "nvme0n1", 00:21:31.510 "core_mask": "0x2", 00:21:31.510 "workload": "randrw", 00:21:31.510 "percentage": 50, 00:21:31.510 "status": "finished", 00:21:31.510 "queue_depth": 128, 00:21:31.510 "io_size": 4096, 00:21:31.510 "runtime": 1.007288, 00:21:31.510 "iops": 11769.225881773633, 00:21:31.510 "mibps": 45.97353860067825, 00:21:31.510 "io_failed": 0, 00:21:31.510 "io_timeout": 0, 00:21:31.510 "avg_latency_us": 10844.53071461984, 00:21:31.510 "min_latency_us": 6911.069090909091, 00:21:31.510 "max_latency_us": 19184.174545454545 00:21:31.510 } 00:21:31.510 ], 00:21:31.510 "core_count": 1 00:21:31.510 } 00:21:31.768 19:29:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:31.768 19:29:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:32.027 19:29:30 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:32.027 19:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:32.027 19:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:32.027 19:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:32.027 19:29:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.027 19:29:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.286 19:29:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:32.286 19:29:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:32.286 19:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:32.286 19:29:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.286 19:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:32.286 19:29:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.286 19:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:32.567 19:29:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:32.567 19:29:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:32.567 19:29:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:32.567 19:29:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:32.567 19:29:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:32.567 19:29:30 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.567 19:29:30 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:32.568 19:29:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.568 19:29:30 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:32.568 19:29:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:32.826 [2024-11-26 19:29:31.217868] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:32.826 [2024-11-26 19:29:31.218535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6d5d0 (107): Transport endpoint is not connected 00:21:32.826 [2024-11-26 19:29:31.219526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6d5d0 (9): Bad file descriptor 00:21:32.826 [2024-11-26 19:29:31.220523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:32.827 [2024-11-26 19:29:31.220548] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:32.827 [2024-11-26 19:29:31.220578] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:32.827 [2024-11-26 19:29:31.220589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
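The refcount assertions sprinkled through the test all reduce to pulling one field out of keyring_get_keys; a small sketch of that helper, using the same jq filter as the trace, makes the pass/fail conditions easier to read (here the point is that a failed attach with the wrong key, key1, must leave every refcount untouched):

    get_refcnt() {   # usage: get_refcnt key0
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$1\") | .refcnt"
    }

    get_refcnt key0   # 2 while nvme0 holds it, back to 1 after bdev_nvme_detach_controller
    get_refcnt key1   # stays 1: the attach attempt with key1 failed, so no reference may leak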
00:21:32.827 request: 00:21:32.827 { 00:21:32.827 "name": "nvme0", 00:21:32.827 "trtype": "tcp", 00:21:32.827 "traddr": "127.0.0.1", 00:21:32.827 "adrfam": "ipv4", 00:21:32.827 "trsvcid": "4420", 00:21:32.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:32.827 "prchk_reftag": false, 00:21:32.827 "prchk_guard": false, 00:21:32.827 "hdgst": false, 00:21:32.827 "ddgst": false, 00:21:32.827 "psk": "key1", 00:21:32.827 "allow_unrecognized_csi": false, 00:21:32.827 "method": "bdev_nvme_attach_controller", 00:21:32.827 "req_id": 1 00:21:32.827 } 00:21:32.827 Got JSON-RPC error response 00:21:32.827 response: 00:21:32.827 { 00:21:32.827 "code": -5, 00:21:32.827 "message": "Input/output error" 00:21:32.827 } 00:21:32.827 19:29:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:32.827 19:29:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.827 19:29:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.827 19:29:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.827 19:29:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:32.827 19:29:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:32.827 19:29:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:32.827 19:29:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.827 19:29:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:32.827 19:29:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.085 19:29:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:33.085 19:29:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:33.085 19:29:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:33.085 19:29:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:33.345 19:29:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:33.345 19:29:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.345 19:29:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.345 19:29:31 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:33.345 19:29:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:33.345 19:29:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:33.604 19:29:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:33.604 19:29:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:34.171 19:29:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:34.171 19:29:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:34.171 19:29:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.171 19:29:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:34.171 19:29:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ojS2i0wWNB 00:21:34.171 19:29:32 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:34.171 19:29:32 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:21:34.171 19:29:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:34.171 19:29:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:34.171 19:29:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.171 19:29:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:34.171 19:29:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.171 19:29:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:34.171 19:29:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:34.430 [2024-11-26 19:29:32.840112] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ojS2i0wWNB': 0100660 00:21:34.430 [2024-11-26 19:29:32.840379] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:34.430 request: 00:21:34.430 { 00:21:34.430 "name": "key0", 00:21:34.430 "path": "/tmp/tmp.ojS2i0wWNB", 00:21:34.430 "method": "keyring_file_add_key", 00:21:34.430 "req_id": 1 00:21:34.430 } 00:21:34.430 Got JSON-RPC error response 00:21:34.430 response: 00:21:34.430 { 00:21:34.430 "code": -1, 00:21:34.430 "message": "Operation not permitted" 00:21:34.430 } 00:21:34.431 19:29:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:34.431 19:29:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.431 19:29:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.431 19:29:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.431 19:29:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ojS2i0wWNB 00:21:34.431 19:29:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:34.431 19:29:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ojS2i0wWNB 00:21:34.689 19:29:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ojS2i0wWNB 00:21:34.689 19:29:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:34.689 19:29:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:34.689 19:29:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.689 19:29:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.689 19:29:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.689 19:29:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.257 19:29:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:35.257 19:29:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.257 19:29:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:35.257 19:29:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.257 19:29:33 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:35.257 19:29:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.257 19:29:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:35.257 19:29:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.257 19:29:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.257 19:29:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.257 [2024-11-26 19:29:33.680305] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ojS2i0wWNB': No such file or directory 00:21:35.257 [2024-11-26 19:29:33.680350] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:35.257 [2024-11-26 19:29:33.680389] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:35.257 [2024-11-26 19:29:33.680398] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:35.257 [2024-11-26 19:29:33.680408] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:35.257 [2024-11-26 19:29:33.680416] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:35.257 request: 00:21:35.257 { 00:21:35.257 "name": "nvme0", 00:21:35.257 "trtype": "tcp", 00:21:35.257 "traddr": "127.0.0.1", 00:21:35.257 "adrfam": "ipv4", 00:21:35.257 "trsvcid": "4420", 00:21:35.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:35.257 "prchk_reftag": false, 00:21:35.257 "prchk_guard": false, 00:21:35.257 "hdgst": false, 00:21:35.257 "ddgst": false, 00:21:35.257 "psk": "key0", 00:21:35.257 "allow_unrecognized_csi": false, 00:21:35.257 "method": "bdev_nvme_attach_controller", 00:21:35.257 "req_id": 1 00:21:35.257 } 00:21:35.257 Got JSON-RPC error response 00:21:35.257 response: 00:21:35.257 { 00:21:35.257 "code": -19, 00:21:35.257 "message": "No such device" 00:21:35.257 } 00:21:35.515 19:29:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:35.515 19:29:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.515 19:29:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.515 19:29:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.515 19:29:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:35.515 19:29:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:35.774 19:29:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:35.774 
19:29:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jYbns2FOxA 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:35.774 19:29:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:35.774 19:29:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:35.774 19:29:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:35.774 19:29:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:35.774 19:29:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:35.774 19:29:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jYbns2FOxA 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jYbns2FOxA 00:21:35.774 19:29:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.jYbns2FOxA 00:21:35.774 19:29:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jYbns2FOxA 00:21:35.774 19:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jYbns2FOxA 00:21:36.033 19:29:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.033 19:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.291 nvme0n1 00:21:36.291 19:29:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:36.291 19:29:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:36.291 19:29:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:36.291 19:29:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.291 19:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.291 19:29:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:36.550 19:29:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:36.550 19:29:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:36.550 19:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:36.809 19:29:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:36.809 19:29:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:36.809 19:29:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.809 19:29:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:36.809 19:29:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.068 19:29:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:37.068 19:29:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:37.068 19:29:35 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:37.068 19:29:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.068 19:29:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:37.068 19:29:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.068 19:29:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.635 19:29:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:37.635 19:29:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:37.635 19:29:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:37.635 19:29:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:37.635 19:29:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:37.635 19:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.202 19:29:36 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:38.202 19:29:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jYbns2FOxA 00:21:38.202 19:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jYbns2FOxA 00:21:38.202 19:29:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.p9n3eZqwNc 00:21:38.202 19:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.p9n3eZqwNc 00:21:38.461 19:29:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:38.461 19:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:39.029 nvme0n1 00:21:39.029 19:29:37 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:39.029 19:29:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:39.288 19:29:37 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:39.288 "subsystems": [ 00:21:39.288 { 00:21:39.288 "subsystem": "keyring", 00:21:39.288 "config": [ 00:21:39.288 { 00:21:39.288 "method": "keyring_file_add_key", 00:21:39.288 "params": { 00:21:39.288 "name": "key0", 00:21:39.288 "path": "/tmp/tmp.jYbns2FOxA" 00:21:39.288 } 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "method": "keyring_file_add_key", 00:21:39.288 "params": { 00:21:39.288 "name": "key1", 00:21:39.288 "path": "/tmp/tmp.p9n3eZqwNc" 00:21:39.288 } 00:21:39.288 } 00:21:39.288 ] 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "subsystem": "iobuf", 00:21:39.288 "config": [ 00:21:39.288 { 00:21:39.288 "method": "iobuf_set_options", 00:21:39.288 "params": { 00:21:39.288 "small_pool_count": 8192, 00:21:39.288 "large_pool_count": 1024, 00:21:39.288 "small_bufsize": 8192, 00:21:39.288 "large_bufsize": 135168, 00:21:39.288 "enable_numa": false 00:21:39.288 } 00:21:39.288 } 00:21:39.288 ] 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "subsystem": 
"sock", 00:21:39.288 "config": [ 00:21:39.288 { 00:21:39.288 "method": "sock_set_default_impl", 00:21:39.288 "params": { 00:21:39.288 "impl_name": "uring" 00:21:39.288 } 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "method": "sock_impl_set_options", 00:21:39.288 "params": { 00:21:39.288 "impl_name": "ssl", 00:21:39.288 "recv_buf_size": 4096, 00:21:39.288 "send_buf_size": 4096, 00:21:39.288 "enable_recv_pipe": true, 00:21:39.288 "enable_quickack": false, 00:21:39.288 "enable_placement_id": 0, 00:21:39.288 "enable_zerocopy_send_server": true, 00:21:39.288 "enable_zerocopy_send_client": false, 00:21:39.288 "zerocopy_threshold": 0, 00:21:39.288 "tls_version": 0, 00:21:39.288 "enable_ktls": false 00:21:39.288 } 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "method": "sock_impl_set_options", 00:21:39.288 "params": { 00:21:39.288 "impl_name": "posix", 00:21:39.288 "recv_buf_size": 2097152, 00:21:39.288 "send_buf_size": 2097152, 00:21:39.288 "enable_recv_pipe": true, 00:21:39.288 "enable_quickack": false, 00:21:39.288 "enable_placement_id": 0, 00:21:39.288 "enable_zerocopy_send_server": true, 00:21:39.288 "enable_zerocopy_send_client": false, 00:21:39.288 "zerocopy_threshold": 0, 00:21:39.288 "tls_version": 0, 00:21:39.288 "enable_ktls": false 00:21:39.288 } 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "method": "sock_impl_set_options", 00:21:39.288 "params": { 00:21:39.288 "impl_name": "uring", 00:21:39.288 "recv_buf_size": 2097152, 00:21:39.288 "send_buf_size": 2097152, 00:21:39.288 "enable_recv_pipe": true, 00:21:39.288 "enable_quickack": false, 00:21:39.288 "enable_placement_id": 0, 00:21:39.288 "enable_zerocopy_send_server": false, 00:21:39.288 "enable_zerocopy_send_client": false, 00:21:39.288 "zerocopy_threshold": 0, 00:21:39.288 "tls_version": 0, 00:21:39.288 "enable_ktls": false 00:21:39.288 } 00:21:39.288 } 00:21:39.288 ] 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "subsystem": "vmd", 00:21:39.288 "config": [] 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "subsystem": "accel", 00:21:39.288 "config": [ 00:21:39.288 { 00:21:39.288 "method": "accel_set_options", 00:21:39.288 "params": { 00:21:39.288 "small_cache_size": 128, 00:21:39.288 "large_cache_size": 16, 00:21:39.288 "task_count": 2048, 00:21:39.288 "sequence_count": 2048, 00:21:39.288 "buf_count": 2048 00:21:39.288 } 00:21:39.288 } 00:21:39.288 ] 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "subsystem": "bdev", 00:21:39.288 "config": [ 00:21:39.288 { 00:21:39.288 "method": "bdev_set_options", 00:21:39.288 "params": { 00:21:39.288 "bdev_io_pool_size": 65535, 00:21:39.288 "bdev_io_cache_size": 256, 00:21:39.288 "bdev_auto_examine": true, 00:21:39.288 "iobuf_small_cache_size": 128, 00:21:39.288 "iobuf_large_cache_size": 16 00:21:39.288 } 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "method": "bdev_raid_set_options", 00:21:39.288 "params": { 00:21:39.288 "process_window_size_kb": 1024, 00:21:39.288 "process_max_bandwidth_mb_sec": 0 00:21:39.288 } 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "method": "bdev_iscsi_set_options", 00:21:39.288 "params": { 00:21:39.288 "timeout_sec": 30 00:21:39.288 } 00:21:39.288 }, 00:21:39.288 { 00:21:39.288 "method": "bdev_nvme_set_options", 00:21:39.288 "params": { 00:21:39.288 "action_on_timeout": "none", 00:21:39.288 "timeout_us": 0, 00:21:39.288 "timeout_admin_us": 0, 00:21:39.288 "keep_alive_timeout_ms": 10000, 00:21:39.288 "arbitration_burst": 0, 00:21:39.288 "low_priority_weight": 0, 00:21:39.288 "medium_priority_weight": 0, 00:21:39.288 "high_priority_weight": 0, 00:21:39.288 "nvme_adminq_poll_period_us": 
10000, 00:21:39.288 "nvme_ioq_poll_period_us": 0, 00:21:39.288 "io_queue_requests": 512, 00:21:39.288 "delay_cmd_submit": true, 00:21:39.288 "transport_retry_count": 4, 00:21:39.288 "bdev_retry_count": 3, 00:21:39.288 "transport_ack_timeout": 0, 00:21:39.288 "ctrlr_loss_timeout_sec": 0, 00:21:39.288 "reconnect_delay_sec": 0, 00:21:39.288 "fast_io_fail_timeout_sec": 0, 00:21:39.288 "disable_auto_failback": false, 00:21:39.288 "generate_uuids": false, 00:21:39.288 "transport_tos": 0, 00:21:39.288 "nvme_error_stat": false, 00:21:39.288 "rdma_srq_size": 0, 00:21:39.288 "io_path_stat": false, 00:21:39.288 "allow_accel_sequence": false, 00:21:39.288 "rdma_max_cq_size": 0, 00:21:39.288 "rdma_cm_event_timeout_ms": 0, 00:21:39.288 "dhchap_digests": [ 00:21:39.288 "sha256", 00:21:39.288 "sha384", 00:21:39.288 "sha512" 00:21:39.289 ], 00:21:39.289 "dhchap_dhgroups": [ 00:21:39.289 "null", 00:21:39.289 "ffdhe2048", 00:21:39.289 "ffdhe3072", 00:21:39.289 "ffdhe4096", 00:21:39.289 "ffdhe6144", 00:21:39.289 "ffdhe8192" 00:21:39.289 ] 00:21:39.289 } 00:21:39.289 }, 00:21:39.289 { 00:21:39.289 "method": "bdev_nvme_attach_controller", 00:21:39.289 "params": { 00:21:39.289 "name": "nvme0", 00:21:39.289 "trtype": "TCP", 00:21:39.289 "adrfam": "IPv4", 00:21:39.289 "traddr": "127.0.0.1", 00:21:39.289 "trsvcid": "4420", 00:21:39.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:39.289 "prchk_reftag": false, 00:21:39.289 "prchk_guard": false, 00:21:39.289 "ctrlr_loss_timeout_sec": 0, 00:21:39.289 "reconnect_delay_sec": 0, 00:21:39.289 "fast_io_fail_timeout_sec": 0, 00:21:39.289 "psk": "key0", 00:21:39.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:39.289 "hdgst": false, 00:21:39.289 "ddgst": false, 00:21:39.289 "multipath": "multipath" 00:21:39.289 } 00:21:39.289 }, 00:21:39.289 { 00:21:39.289 "method": "bdev_nvme_set_hotplug", 00:21:39.289 "params": { 00:21:39.289 "period_us": 100000, 00:21:39.289 "enable": false 00:21:39.289 } 00:21:39.289 }, 00:21:39.289 { 00:21:39.289 "method": "bdev_wait_for_examine" 00:21:39.289 } 00:21:39.289 ] 00:21:39.289 }, 00:21:39.289 { 00:21:39.289 "subsystem": "nbd", 00:21:39.289 "config": [] 00:21:39.289 } 00:21:39.289 ] 00:21:39.289 }' 00:21:39.289 19:29:37 keyring_file -- keyring/file.sh@115 -- # killprocess 84937 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84937 ']' 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84937 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84937 00:21:39.289 killing process with pid 84937 00:21:39.289 Received shutdown signal, test time was about 1.000000 seconds 00:21:39.289 00:21:39.289 Latency(us) 00:21:39.289 [2024-11-26T19:29:37.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.289 [2024-11-26T19:29:37.729Z] =================================================================================================================== 00:21:39.289 [2024-11-26T19:29:37.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84937' 00:21:39.289 
19:29:37 keyring_file -- common/autotest_common.sh@973 -- # kill 84937 00:21:39.289 19:29:37 keyring_file -- common/autotest_common.sh@978 -- # wait 84937 00:21:39.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.548 19:29:37 keyring_file -- keyring/file.sh@118 -- # bperfpid=85195 00:21:39.548 19:29:37 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85195 /var/tmp/bperf.sock 00:21:39.548 19:29:37 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85195 ']' 00:21:39.548 19:29:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.548 19:29:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.548 19:29:37 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:39.548 19:29:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.548 19:29:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.548 19:29:37 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:39.548 "subsystems": [ 00:21:39.548 { 00:21:39.548 "subsystem": "keyring", 00:21:39.548 "config": [ 00:21:39.548 { 00:21:39.548 "method": "keyring_file_add_key", 00:21:39.548 "params": { 00:21:39.548 "name": "key0", 00:21:39.548 "path": "/tmp/tmp.jYbns2FOxA" 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": "keyring_file_add_key", 00:21:39.548 "params": { 00:21:39.548 "name": "key1", 00:21:39.548 "path": "/tmp/tmp.p9n3eZqwNc" 00:21:39.548 } 00:21:39.548 } 00:21:39.548 ] 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "subsystem": "iobuf", 00:21:39.548 "config": [ 00:21:39.548 { 00:21:39.548 "method": "iobuf_set_options", 00:21:39.548 "params": { 00:21:39.548 "small_pool_count": 8192, 00:21:39.548 "large_pool_count": 1024, 00:21:39.548 "small_bufsize": 8192, 00:21:39.548 "large_bufsize": 135168, 00:21:39.548 "enable_numa": false 00:21:39.548 } 00:21:39.548 } 00:21:39.548 ] 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "subsystem": "sock", 00:21:39.548 "config": [ 00:21:39.548 { 00:21:39.548 "method": "sock_set_default_impl", 00:21:39.548 "params": { 00:21:39.548 "impl_name": "uring" 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": "sock_impl_set_options", 00:21:39.548 "params": { 00:21:39.548 "impl_name": "ssl", 00:21:39.548 "recv_buf_size": 4096, 00:21:39.548 "send_buf_size": 4096, 00:21:39.548 "enable_recv_pipe": true, 00:21:39.548 "enable_quickack": false, 00:21:39.548 "enable_placement_id": 0, 00:21:39.548 "enable_zerocopy_send_server": true, 00:21:39.548 "enable_zerocopy_send_client": false, 00:21:39.548 "zerocopy_threshold": 0, 00:21:39.548 "tls_version": 0, 00:21:39.548 "enable_ktls": false 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": "sock_impl_set_options", 00:21:39.548 "params": { 00:21:39.548 "impl_name": "posix", 00:21:39.548 "recv_buf_size": 2097152, 00:21:39.548 "send_buf_size": 2097152, 00:21:39.548 "enable_recv_pipe": true, 00:21:39.548 "enable_quickack": false, 00:21:39.548 "enable_placement_id": 0, 00:21:39.548 "enable_zerocopy_send_server": true, 00:21:39.548 "enable_zerocopy_send_client": false, 00:21:39.548 "zerocopy_threshold": 0, 00:21:39.548 "tls_version": 0, 00:21:39.548 "enable_ktls": false 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": 
"sock_impl_set_options", 00:21:39.548 "params": { 00:21:39.548 "impl_name": "uring", 00:21:39.548 "recv_buf_size": 2097152, 00:21:39.548 "send_buf_size": 2097152, 00:21:39.548 "enable_recv_pipe": true, 00:21:39.548 "enable_quickack": false, 00:21:39.548 "enable_placement_id": 0, 00:21:39.548 "enable_zerocopy_send_server": false, 00:21:39.548 "enable_zerocopy_send_client": false, 00:21:39.548 "zerocopy_threshold": 0, 00:21:39.548 "tls_version": 0, 00:21:39.548 "enable_ktls": false 00:21:39.548 } 00:21:39.548 } 00:21:39.548 ] 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "subsystem": "vmd", 00:21:39.548 "config": [] 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "subsystem": "accel", 00:21:39.548 "config": [ 00:21:39.548 { 00:21:39.548 "method": "accel_set_options", 00:21:39.548 "params": { 00:21:39.548 "small_cache_size": 128, 00:21:39.548 "large_cache_size": 16, 00:21:39.548 "task_count": 2048, 00:21:39.548 "sequence_count": 2048, 00:21:39.548 "buf_count": 2048 00:21:39.548 } 00:21:39.548 } 00:21:39.548 ] 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "subsystem": "bdev", 00:21:39.548 "config": [ 00:21:39.548 { 00:21:39.548 "method": "bdev_set_options", 00:21:39.548 "params": { 00:21:39.548 "bdev_io_pool_size": 65535, 00:21:39.548 "bdev_io_cache_size": 256, 00:21:39.548 "bdev_auto_examine": true, 00:21:39.548 "iobuf_small_cache_size": 128, 00:21:39.548 "iobuf_large_cache_size": 16 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": "bdev_raid_set_options", 00:21:39.548 "params": { 00:21:39.548 "process_window_size_kb": 1024, 00:21:39.548 "process_max_bandwidth_mb_sec": 0 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": "bdev_iscsi_set_options", 00:21:39.548 "params": { 00:21:39.548 "timeout_sec": 30 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": "bdev_nvme_set_options", 00:21:39.548 "params": { 00:21:39.548 "action_on_timeout": "none", 00:21:39.548 "timeout_us": 0, 00:21:39.548 "timeout_admin_us": 0, 00:21:39.548 "keep_alive_timeout_ms": 10000, 00:21:39.548 "arbitration_burst": 0, 00:21:39.548 "low_priority_weight": 0, 00:21:39.548 "medium_priority_weight": 0, 00:21:39.548 "high_priority_weight": 0, 00:21:39.548 "nvme_adminq_poll_period_us": 10000, 00:21:39.548 "nvme_ioq_poll_period_us": 0, 00:21:39.548 "io_queue_requests": 512, 00:21:39.548 "delay_cmd_submit": true, 00:21:39.548 "transport_retry_count": 4, 00:21:39.548 "bdev_retry_count": 3, 00:21:39.548 "transport_ack_timeout": 0, 00:21:39.548 "ctrlr_loss_timeout_sec": 0, 00:21:39.548 "reconnect_delay_sec": 0, 00:21:39.548 "fast_io_fail_timeout_sec": 0, 00:21:39.548 "disable_auto_failback": false, 00:21:39.548 "generate_uuids": false, 00:21:39.548 "transport_tos": 0, 00:21:39.548 "nvme_error_stat": false, 00:21:39.548 "rdma_srq_size": 0, 00:21:39.548 "io_path_stat": false, 00:21:39.548 "allow_accel_sequence": false, 00:21:39.548 "rdma_max_cq_size": 0, 00:21:39.548 "rdma_cm_event_timeout_ms": 0, 00:21:39.548 "dhchap_digests": [ 00:21:39.548 "sha256", 00:21:39.548 "sha384", 00:21:39.548 "sha512" 00:21:39.548 ], 00:21:39.548 "dhchap_dhgroups": [ 00:21:39.548 "null", 00:21:39.548 "ffdhe2048", 00:21:39.548 "ffdhe3072", 00:21:39.548 "ffdhe4096", 00:21:39.548 "ffdhe6144", 00:21:39.548 "ffdhe8192" 00:21:39.548 ] 00:21:39.548 } 00:21:39.548 }, 00:21:39.548 { 00:21:39.548 "method": "bdev_nvme_attach_controller", 00:21:39.548 "params": { 00:21:39.548 "name": "nvme0", 00:21:39.548 "trtype": "TCP", 00:21:39.548 "adrfam": "IPv4", 00:21:39.548 "traddr": "127.0.0.1", 00:21:39.548 "trsvcid": "4420", 
00:21:39.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:39.548 "prchk_reftag": false, 00:21:39.548 "prchk_guard": false, 00:21:39.548 "ctrlr_loss_timeout_sec": 0, 00:21:39.548 "reconnect_delay_sec": 0, 00:21:39.548 "fast_io_fail_timeout_sec": 0, 00:21:39.549 "psk": "key0", 00:21:39.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:39.549 "hdgst": false, 00:21:39.549 "ddgst": false, 00:21:39.549 "multipath": "multipath" 00:21:39.549 } 00:21:39.549 }, 00:21:39.549 { 00:21:39.549 "method": "bdev_nvme_set_hotplug", 00:21:39.549 "params": { 00:21:39.549 "period_us": 100000, 00:21:39.549 "enable": false 00:21:39.549 } 00:21:39.549 }, 00:21:39.549 { 00:21:39.549 "method": "bdev_wait_for_examine" 00:21:39.549 } 00:21:39.549 ] 00:21:39.549 }, 00:21:39.549 { 00:21:39.549 "subsystem": "nbd", 00:21:39.549 "config": [] 00:21:39.549 } 00:21:39.549 ] 00:21:39.549 }' 00:21:39.549 19:29:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:39.549 [2024-11-26 19:29:37.847079] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 00:21:39.549 [2024-11-26 19:29:37.847464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85195 ] 00:21:39.807 [2024-11-26 19:29:37.991945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.807 [2024-11-26 19:29:38.048842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.807 [2024-11-26 19:29:38.184049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.807 [2024-11-26 19:29:38.239990] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.375 19:29:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.375 19:29:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:40.375 19:29:38 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:40.633 19:29:38 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:40.633 19:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.906 19:29:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:40.906 19:29:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:40.906 19:29:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:40.906 19:29:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:40.906 19:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:40.906 19:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:40.906 19:29:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.176 19:29:39 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:41.176 19:29:39 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:41.176 19:29:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:41.176 19:29:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:41.176 19:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.176 19:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:41.176 19:29:39 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.434 19:29:39 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:41.434 19:29:39 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:41.434 19:29:39 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:41.434 19:29:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:41.692 19:29:39 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:41.692 19:29:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:41.692 19:29:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jYbns2FOxA /tmp/tmp.p9n3eZqwNc 00:21:41.692 19:29:39 keyring_file -- keyring/file.sh@20 -- # killprocess 85195 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85195 ']' 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85195 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85195 00:21:41.692 killing process with pid 85195 00:21:41.692 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.692 00:21:41.692 Latency(us) 00:21:41.692 [2024-11-26T19:29:40.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.692 [2024-11-26T19:29:40.132Z] =================================================================================================================== 00:21:41.692 [2024-11-26T19:29:40.132Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85195' 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@973 -- # kill 85195 00:21:41.692 19:29:39 keyring_file -- common/autotest_common.sh@978 -- # wait 85195 00:21:41.950 19:29:40 keyring_file -- keyring/file.sh@21 -- # killprocess 84920 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84920 ']' 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84920 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84920 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.950 killing process with pid 84920 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84920' 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@973 -- # kill 84920 00:21:41.950 19:29:40 keyring_file -- common/autotest_common.sh@978 -- # wait 84920 00:21:42.208 ************************************ 00:21:42.208 END TEST keyring_file 00:21:42.208 ************************************ 00:21:42.208 00:21:42.208 real 0m16.800s 00:21:42.208 user 0m42.166s 
00:21:42.208 sys 0m3.108s 00:21:42.208 19:29:40 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.208 19:29:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:42.208 19:29:40 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:21:42.208 19:29:40 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:42.208 19:29:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:42.208 19:29:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.208 19:29:40 -- common/autotest_common.sh@10 -- # set +x 00:21:42.208 ************************************ 00:21:42.208 START TEST keyring_linux 00:21:42.208 ************************************ 00:21:42.208 19:29:40 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:42.208 Joined session keyring: 667964589 00:21:42.466 * Looking for test storage... 00:21:42.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:42.466 19:29:40 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:42.466 19:29:40 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:21:42.466 19:29:40 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:42.467 19:29:40 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:42.467 19:29:40 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.467 19:29:40 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.467 --rc genhtml_branch_coverage=1 00:21:42.467 --rc genhtml_function_coverage=1 00:21:42.467 --rc genhtml_legend=1 00:21:42.467 --rc geninfo_all_blocks=1 00:21:42.467 --rc geninfo_unexecuted_blocks=1 00:21:42.467 00:21:42.467 ' 00:21:42.467 19:29:40 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.467 --rc genhtml_branch_coverage=1 00:21:42.467 --rc genhtml_function_coverage=1 00:21:42.467 --rc genhtml_legend=1 00:21:42.467 --rc geninfo_all_blocks=1 00:21:42.467 --rc geninfo_unexecuted_blocks=1 00:21:42.467 00:21:42.467 ' 00:21:42.467 19:29:40 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.467 --rc genhtml_branch_coverage=1 00:21:42.467 --rc genhtml_function_coverage=1 00:21:42.467 --rc genhtml_legend=1 00:21:42.467 --rc geninfo_all_blocks=1 00:21:42.467 --rc geninfo_unexecuted_blocks=1 00:21:42.467 00:21:42.467 ' 00:21:42.467 19:29:40 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:42.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.467 --rc genhtml_branch_coverage=1 00:21:42.467 --rc genhtml_function_coverage=1 00:21:42.467 --rc genhtml_legend=1 00:21:42.467 --rc geninfo_all_blocks=1 00:21:42.467 --rc geninfo_unexecuted_blocks=1 00:21:42.467 00:21:42.467 ' 00:21:42.467 19:29:40 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:42.467 19:29:40 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.467 19:29:40 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:560f6fb4-1392-4f8a-a310-a32d17cc4390 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=560f6fb4-1392-4f8a-a310-a32d17cc4390 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.467 19:29:40 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.467 19:29:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.467 19:29:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.467 19:29:40 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.467 19:29:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:42.467 19:29:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.467 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.467 19:29:40 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:42.468 19:29:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:42.468 19:29:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:42.468 19:29:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:42.468 19:29:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:42.468 19:29:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:42.468 19:29:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:42.468 /tmp/:spdk-test:key0 00:21:42.468 19:29:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:42.468 19:29:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:42.468 19:29:40 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:42.726 19:29:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:42.726 19:29:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:42.726 /tmp/:spdk-test:key1 00:21:42.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.726 19:29:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85318 00:21:42.726 19:29:40 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:42.726 19:29:40 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85318 00:21:42.726 19:29:40 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85318 ']' 00:21:42.726 19:29:40 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.726 19:29:40 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.726 19:29:40 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.726 19:29:40 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.726 19:29:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:42.726 [2024-11-26 19:29:41.005921] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:21:42.726 [2024-11-26 19:29:41.006234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85318 ] 00:21:42.726 [2024-11-26 19:29:41.155007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.993 [2024-11-26 19:29:41.219695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.993 [2024-11-26 19:29:41.290854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.560 19:29:41 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.560 19:29:41 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:43.560 19:29:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:43.560 19:29:41 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.560 19:29:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:43.560 [2024-11-26 19:29:41.969169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.560 null0 00:21:43.819 [2024-11-26 19:29:42.001166] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.819 [2024-11-26 19:29:42.001365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:43.819 19:29:42 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.819 19:29:42 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:43.819 196533488 00:21:43.819 19:29:42 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:43.819 302304364 00:21:43.819 19:29:42 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85342 00:21:43.820 19:29:42 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:43.820 19:29:42 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85342 /var/tmp/bperf.sock 00:21:43.820 19:29:42 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85342 ']' 00:21:43.820 19:29:42 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:43.820 19:29:42 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:43.820 19:29:42 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:43.820 19:29:42 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.820 19:29:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:43.820 [2024-11-26 19:29:42.088371] Starting SPDK v25.01-pre git sha1 67afc973b / DPDK 24.03.0 initialization... 
00:21:43.820 [2024-11-26 19:29:42.088471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85342 ] 00:21:43.820 [2024-11-26 19:29:42.234645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.078 [2024-11-26 19:29:42.289409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.014 19:29:43 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.014 19:29:43 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:45.015 19:29:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:45.015 19:29:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:45.015 19:29:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:45.015 19:29:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:45.273 [2024-11-26 19:29:43.607020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:45.273 19:29:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:45.273 19:29:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:45.531 [2024-11-26 19:29:43.883235] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.531 nvme0n1 00:21:45.789 19:29:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:45.789 19:29:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:45.789 19:29:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:45.789 19:29:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:45.789 19:29:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:45.789 19:29:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.789 19:29:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:45.789 19:29:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:45.789 19:29:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:45.789 19:29:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:45.789 19:29:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:45.789 19:29:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.789 19:29:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:46.356 19:29:44 keyring_linux -- keyring/linux.sh@25 -- # sn=196533488 00:21:46.356 19:29:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:46.356 19:29:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:21:46.356 19:29:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 196533488 == \1\9\6\5\3\3\4\8\8 ]] 00:21:46.356 19:29:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 196533488 00:21:46.356 19:29:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:46.356 19:29:44 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:46.356 Running I/O for 1 seconds... 00:21:47.336 14112.00 IOPS, 55.12 MiB/s 00:21:47.336 Latency(us) 00:21:47.336 [2024-11-26T19:29:45.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.336 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:47.336 nvme0n1 : 1.01 14122.20 55.16 0.00 0.00 9021.59 6196.13 15490.33 00:21:47.336 [2024-11-26T19:29:45.776Z] =================================================================================================================== 00:21:47.336 [2024-11-26T19:29:45.776Z] Total : 14122.20 55.16 0.00 0.00 9021.59 6196.13 15490.33 00:21:47.336 { 00:21:47.336 "results": [ 00:21:47.336 { 00:21:47.336 "job": "nvme0n1", 00:21:47.336 "core_mask": "0x2", 00:21:47.336 "workload": "randread", 00:21:47.336 "status": "finished", 00:21:47.336 "queue_depth": 128, 00:21:47.336 "io_size": 4096, 00:21:47.336 "runtime": 1.008412, 00:21:47.336 "iops": 14122.204019785564, 00:21:47.336 "mibps": 55.16485945228736, 00:21:47.336 "io_failed": 0, 00:21:47.336 "io_timeout": 0, 00:21:47.336 "avg_latency_us": 9021.594072939208, 00:21:47.336 "min_latency_us": 6196.130909090909, 00:21:47.336 "max_latency_us": 15490.327272727272 00:21:47.336 } 00:21:47.336 ], 00:21:47.336 "core_count": 1 00:21:47.336 } 00:21:47.336 19:29:45 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:47.336 19:29:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:47.594 19:29:45 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:47.594 19:29:45 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:47.594 19:29:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:47.594 19:29:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:47.594 19:29:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.594 19:29:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:47.852 19:29:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:47.852 19:29:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:47.852 19:29:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:47.852 19:29:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:47.852 19:29:46 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:21:47.852 19:29:46 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:47.852 
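The NVMeTLSkey-1:00:MDAx... literal compared just above is the interchange form of the plaintext key 00112233445566778899aabbccddeeff that keyring_linux stored in the session keyring with keyctl. A minimal sketch of that wrapping, inferred from the observed output rather than taken from SPDK's nvmf/common.sh helper, so treat the CRC-32 tail as an assumption:

    format_interchange_psk_sketch() {
        # Sketch only: NVMeTLSkey-1:<digest>:<base64(key bytes || CRC-32 of key)>:
        local key=$1 digest=${2:-0}
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(k+crc).decode()}:")' "$key" "$digest"
    }
    # e.g. format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0
    #      should reproduce the NVMeTLSkey-1:00:... value stored and compared above if the CRC-32 assumption holds
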
19:29:46 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:47.852 19:29:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.852 19:29:46 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:47.852 19:29:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.852 19:29:46 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:47.852 19:29:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:48.112 [2024-11-26 19:29:46.474697] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:48.112 [2024-11-26 19:29:46.475022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a55d0 (107): Transport endpoint is not connected 00:21:48.112 [2024-11-26 19:29:46.476013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a55d0 (9): Bad file descriptor 00:21:48.112 [2024-11-26 19:29:46.477024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:48.112 [2024-11-26 19:29:46.477251] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:48.112 [2024-11-26 19:29:46.477267] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:48.112 [2024-11-26 19:29:46.477280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:48.112 request: 00:21:48.112 { 00:21:48.112 "name": "nvme0", 00:21:48.112 "trtype": "tcp", 00:21:48.112 "traddr": "127.0.0.1", 00:21:48.112 "adrfam": "ipv4", 00:21:48.112 "trsvcid": "4420", 00:21:48.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:48.112 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:48.112 "prchk_reftag": false, 00:21:48.112 "prchk_guard": false, 00:21:48.112 "hdgst": false, 00:21:48.112 "ddgst": false, 00:21:48.112 "psk": ":spdk-test:key1", 00:21:48.112 "allow_unrecognized_csi": false, 00:21:48.112 "method": "bdev_nvme_attach_controller", 00:21:48.112 "req_id": 1 00:21:48.112 } 00:21:48.112 Got JSON-RPC error response 00:21:48.112 response: 00:21:48.112 { 00:21:48.112 "code": -5, 00:21:48.112 "message": "Input/output error" 00:21:48.112 } 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@33 -- # sn=196533488 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 196533488 00:21:48.112 1 links removed 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@33 -- # sn=302304364 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 302304364 00:21:48.112 1 links removed 00:21:48.112 19:29:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85342 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85342 ']' 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85342 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.112 19:29:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85342 00:21:48.371 killing process with pid 85342 00:21:48.371 Received shutdown signal, test time was about 1.000000 seconds 00:21:48.371 00:21:48.371 Latency(us) 00:21:48.371 [2024-11-26T19:29:46.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.371 [2024-11-26T19:29:46.811Z] =================================================================================================================== 00:21:48.371 [2024-11-26T19:29:46.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.371 19:29:46 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85342' 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@973 -- # kill 85342 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@978 -- # wait 85342 00:21:48.371 19:29:46 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85318 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85318 ']' 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85318 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85318 00:21:48.371 killing process with pid 85318 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85318' 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@973 -- # kill 85318 00:21:48.371 19:29:46 keyring_linux -- common/autotest_common.sh@978 -- # wait 85318 00:21:48.938 ************************************ 00:21:48.938 END TEST keyring_linux 00:21:48.938 ************************************ 00:21:48.938 00:21:48.938 real 0m6.550s 00:21:48.938 user 0m12.673s 00:21:48.938 sys 0m1.618s 00:21:48.938 19:29:47 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.938 19:29:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:48.938 19:29:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:48.938 19:29:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:48.938 19:29:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:48.938 19:29:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:48.938 19:29:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:48.938 19:29:47 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:48.938 19:29:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:48.938 19:29:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.938 19:29:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.938 19:29:47 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:48.938 19:29:47 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:48.938 19:29:47 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:48.938 19:29:47 -- common/autotest_common.sh@10 -- # set +x 00:21:50.838 INFO: APP EXITING 00:21:50.838 INFO: killing all VMs 
00:21:50.838 INFO: killing vhost app 00:21:50.838 INFO: EXIT DONE 00:21:51.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.404 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:51.404 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:51.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:52.234 Cleaning 00:21:52.234 Removing: /var/run/dpdk/spdk0/config 00:21:52.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:52.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:52.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:52.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:52.234 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:52.234 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:52.234 Removing: /var/run/dpdk/spdk1/config 00:21:52.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:52.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:52.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:52.234 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:52.234 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:52.234 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:52.234 Removing: /var/run/dpdk/spdk2/config 00:21:52.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:52.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:52.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:52.234 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:52.234 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:52.234 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:52.234 Removing: /var/run/dpdk/spdk3/config 00:21:52.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:52.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:52.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:52.234 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:52.234 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:52.234 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:52.234 Removing: /var/run/dpdk/spdk4/config 00:21:52.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:52.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:52.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:52.234 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:52.234 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:52.234 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:52.234 Removing: /dev/shm/nvmf_trace.0 00:21:52.234 Removing: /dev/shm/spdk_tgt_trace.pid56638 00:21:52.234 Removing: /var/run/dpdk/spdk0 00:21:52.234 Removing: /var/run/dpdk/spdk1 00:21:52.234 Removing: /var/run/dpdk/spdk2 00:21:52.234 Removing: /var/run/dpdk/spdk3 00:21:52.234 Removing: /var/run/dpdk/spdk4 00:21:52.234 Removing: /var/run/dpdk/spdk_pid56485 00:21:52.234 Removing: /var/run/dpdk/spdk_pid56638 00:21:52.234 Removing: /var/run/dpdk/spdk_pid56836 00:21:52.234 Removing: /var/run/dpdk/spdk_pid56923 00:21:52.234 Removing: /var/run/dpdk/spdk_pid56950 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57060 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57070 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57210 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57405 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57558 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57632 00:21:52.234 
Removing: /var/run/dpdk/spdk_pid57708 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57800 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57872 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57911 00:21:52.234 Removing: /var/run/dpdk/spdk_pid57947 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58011 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58094 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58538 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58583 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58626 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58635 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58702 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58710 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58777 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58799 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58844 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58855 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58899 00:21:52.234 Removing: /var/run/dpdk/spdk_pid58911 00:21:52.234 Removing: /var/run/dpdk/spdk_pid59041 00:21:52.234 Removing: /var/run/dpdk/spdk_pid59077 00:21:52.234 Removing: /var/run/dpdk/spdk_pid59159 00:21:52.234 Removing: /var/run/dpdk/spdk_pid59486 00:21:52.234 Removing: /var/run/dpdk/spdk_pid59498 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59534 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59548 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59569 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59588 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59601 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59617 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59636 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59655 00:21:52.493 Removing: /var/run/dpdk/spdk_pid59665 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59684 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59703 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59718 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59737 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59751 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59772 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59791 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59799 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59820 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59856 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59864 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59899 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59971 00:21:52.494 Removing: /var/run/dpdk/spdk_pid59994 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60009 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60032 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60047 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60049 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60097 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60105 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60139 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60143 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60160 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60164 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60179 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60183 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60199 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60203 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60237 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60258 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60273 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60296 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60311 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60313 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60359 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60366 00:21:52.494 Removing: 
/var/run/dpdk/spdk_pid60397 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60405 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60412 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60420 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60427 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60435 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60442 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60452 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60535 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60582 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60695 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60734 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60779 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60793 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60810 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60830 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60867 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60877 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60956 00:21:52.494 Removing: /var/run/dpdk/spdk_pid60985 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61022 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61079 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61141 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61173 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61272 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61314 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61352 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61579 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61676 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61710 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61734 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61773 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61808 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61840 00:21:52.494 Removing: /var/run/dpdk/spdk_pid61872 00:21:52.494 Removing: /var/run/dpdk/spdk_pid62261 00:21:52.494 Removing: /var/run/dpdk/spdk_pid62299 00:21:52.494 Removing: /var/run/dpdk/spdk_pid62643 00:21:52.494 Removing: /var/run/dpdk/spdk_pid63108 00:21:52.753 Removing: /var/run/dpdk/spdk_pid63388 00:21:52.753 Removing: /var/run/dpdk/spdk_pid64233 00:21:52.753 Removing: /var/run/dpdk/spdk_pid65154 00:21:52.753 Removing: /var/run/dpdk/spdk_pid65277 00:21:52.753 Removing: /var/run/dpdk/spdk_pid65339 00:21:52.753 Removing: /var/run/dpdk/spdk_pid66752 00:21:52.753 Removing: /var/run/dpdk/spdk_pid67068 00:21:52.753 Removing: /var/run/dpdk/spdk_pid70769 00:21:52.753 Removing: /var/run/dpdk/spdk_pid71116 00:21:52.753 Removing: /var/run/dpdk/spdk_pid71226 00:21:52.753 Removing: /var/run/dpdk/spdk_pid71359 00:21:52.753 Removing: /var/run/dpdk/spdk_pid71381 00:21:52.753 Removing: /var/run/dpdk/spdk_pid71404 00:21:52.753 Removing: /var/run/dpdk/spdk_pid71426 00:21:52.754 Removing: /var/run/dpdk/spdk_pid71524 00:21:52.754 Removing: /var/run/dpdk/spdk_pid71652 00:21:52.754 Removing: /var/run/dpdk/spdk_pid71803 00:21:52.754 Removing: /var/run/dpdk/spdk_pid71879 00:21:52.754 Removing: /var/run/dpdk/spdk_pid72073 00:21:52.754 Removing: /var/run/dpdk/spdk_pid72141 00:21:52.754 Removing: /var/run/dpdk/spdk_pid72227 00:21:52.754 Removing: /var/run/dpdk/spdk_pid72574 00:21:52.754 Removing: /var/run/dpdk/spdk_pid72980 00:21:52.754 Removing: /var/run/dpdk/spdk_pid72981 00:21:52.754 Removing: /var/run/dpdk/spdk_pid72982 00:21:52.754 Removing: /var/run/dpdk/spdk_pid73243 00:21:52.754 Removing: /var/run/dpdk/spdk_pid73500 00:21:52.754 Removing: /var/run/dpdk/spdk_pid73880 00:21:52.754 Removing: /var/run/dpdk/spdk_pid73882 00:21:52.754 Removing: /var/run/dpdk/spdk_pid74205 00:21:52.754 Removing: /var/run/dpdk/spdk_pid74219 
00:21:52.754 Removing: /var/run/dpdk/spdk_pid74239 00:21:52.754 Removing: /var/run/dpdk/spdk_pid74268 00:21:52.754 Removing: /var/run/dpdk/spdk_pid74274 00:21:52.754 Removing: /var/run/dpdk/spdk_pid74624 00:21:52.754 Removing: /var/run/dpdk/spdk_pid74667 00:21:52.754 Removing: /var/run/dpdk/spdk_pid75007 00:21:52.754 Removing: /var/run/dpdk/spdk_pid75210 00:21:52.754 Removing: /var/run/dpdk/spdk_pid75632 00:21:52.754 Removing: /var/run/dpdk/spdk_pid76184 00:21:52.754 Removing: /var/run/dpdk/spdk_pid77052 00:21:52.754 Removing: /var/run/dpdk/spdk_pid77683 00:21:52.754 Removing: /var/run/dpdk/spdk_pid77685 00:21:52.754 Removing: /var/run/dpdk/spdk_pid79683 00:21:52.754 Removing: /var/run/dpdk/spdk_pid79730 00:21:52.754 Removing: /var/run/dpdk/spdk_pid79790 00:21:52.754 Removing: /var/run/dpdk/spdk_pid79844 00:21:52.754 Removing: /var/run/dpdk/spdk_pid79943 00:21:52.754 Removing: /var/run/dpdk/spdk_pid79996 00:21:52.754 Removing: /var/run/dpdk/spdk_pid80044 00:21:52.754 Removing: /var/run/dpdk/spdk_pid80097 00:21:52.754 Removing: /var/run/dpdk/spdk_pid80466 00:21:52.754 Removing: /var/run/dpdk/spdk_pid81676 00:21:52.754 Removing: /var/run/dpdk/spdk_pid81822 00:21:52.754 Removing: /var/run/dpdk/spdk_pid82059 00:21:52.754 Removing: /var/run/dpdk/spdk_pid82671 00:21:52.754 Removing: /var/run/dpdk/spdk_pid82831 00:21:52.754 Removing: /var/run/dpdk/spdk_pid82988 00:21:52.754 Removing: /var/run/dpdk/spdk_pid83086 00:21:52.754 Removing: /var/run/dpdk/spdk_pid83245 00:21:52.754 Removing: /var/run/dpdk/spdk_pid83354 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84054 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84088 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84125 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84382 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84417 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84447 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84920 00:21:52.754 Removing: /var/run/dpdk/spdk_pid84937 00:21:52.754 Removing: /var/run/dpdk/spdk_pid85195 00:21:52.754 Removing: /var/run/dpdk/spdk_pid85318 00:21:52.754 Removing: /var/run/dpdk/spdk_pid85342 00:21:52.754 Clean 00:21:53.012 19:29:51 -- common/autotest_common.sh@1453 -- # return 0 00:21:53.012 19:29:51 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:53.012 19:29:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.012 19:29:51 -- common/autotest_common.sh@10 -- # set +x 00:21:53.012 19:29:51 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:53.012 19:29:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.013 19:29:51 -- common/autotest_common.sh@10 -- # set +x 00:21:53.013 19:29:51 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:53.013 19:29:51 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:53.013 19:29:51 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:53.013 19:29:51 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:53.013 19:29:51 -- spdk/autotest.sh@398 -- # hostname 00:21:53.013 19:29:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:53.271 geninfo: WARNING: invalid characters removed from testname! 
00:22:19.812 19:30:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:23.121 19:30:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:25.653 19:30:24 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:28.936 19:30:26 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:31.469 19:30:29 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:34.059 19:30:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:36.590 19:30:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:36.590 19:30:34 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:36.590 19:30:34 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:22:36.590 19:30:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:36.590 19:30:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:36.590 19:30:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:36.590 + [[ -n 5195 ]]
00:22:36.590 + sudo kill 5195
00:22:36.598 [Pipeline] }
00:22:36.613 [Pipeline] // timeout
00:22:36.617 [Pipeline] }
00:22:36.631 [Pipeline] // stage
00:22:36.637 [Pipeline] }
00:22:36.648 [Pipeline] // catchError
00:22:36.655 [Pipeline] stage
00:22:36.656 [Pipeline] { (Stop VM)
00:22:36.668 [Pipeline] sh
00:22:36.947 + vagrant halt
00:22:40.232 ==> default: Halting domain...
00:22:46.826 [Pipeline] sh
00:22:47.105 + vagrant destroy -f
00:22:50.388 ==> default: Removing domain...
00:22:50.660 [Pipeline] sh
00:22:50.941 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output
00:22:50.950 [Pipeline] }
00:22:50.962 [Pipeline] // stage
00:22:50.967 [Pipeline] }
00:22:50.978 [Pipeline] // dir
00:22:50.982 [Pipeline] }
00:22:50.994 [Pipeline] // wrap
00:22:50.999 [Pipeline] }
00:22:51.008 [Pipeline] // catchError
00:22:51.015 [Pipeline] stage
00:22:51.016 [Pipeline] { (Epilogue)
00:22:51.025 [Pipeline] sh
00:22:51.301 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:57.876 [Pipeline] catchError
00:22:57.878 [Pipeline] {
00:22:57.890 [Pipeline] sh
00:22:58.169 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:58.169 Artifacts sizes are good
00:22:58.179 [Pipeline] }
00:22:58.192 [Pipeline] // catchError
00:22:58.201 [Pipeline] archiveArtifacts
00:22:58.207 Archiving artifacts
00:22:58.345 [Pipeline] cleanWs
00:22:58.359 [WS-CLEANUP] Deleting project workspace...
00:22:58.359 [WS-CLEANUP] Deferred wipeout is used...
00:22:58.392 [WS-CLEANUP] done
00:22:58.394 [Pipeline] }
00:22:58.412 [Pipeline] // stage
00:22:58.418 [Pipeline] }
00:22:58.435 [Pipeline] // node
00:22:58.442 [Pipeline] End of Pipeline
00:22:58.484 Finished: SUCCESS